/hdg/ - Stable Diffusion

Anime girls generated with AI


/hdg/ #11 Anonymous 03/24/2023 (Fri) 14:07:09 No. 12556
Previous thread >>11276
>>13710 Updating with the update-torch script didn't work either. I'll just stay on the old one then.
>>13740 what is hll anyway
>>13742 hololive/vtuber finetune model made by an anon from /vt/. Very good quality base for mixing; basedmix anon uses it for his model merge
>>13729 You will need the catbox script, but these have been uploaded with catbox, so the prompt should be included
(1.29 MB 1024x1024 catbox_8p9r87.png)

does anyone know where to find any of these loras?
- blade-128d-512x
- ponsuke_(pon00000)
- multi_minigirl_e6
I don't know if I'm just missing some list that has them or not (the gayshit list says it has minigirl, but it was removed from the mega). image as payment
(1.93 MB 1024x1536 catbox_rempm6.png)

(2.14 MB 3840x4096 1680615978607670.jpg)

Who is the artist lora for this? The poster won't say, but I assume he's here too
Are (positive) embeds still part of the meta? I saw a discussion about it on /g/ and it got me thinking about experimenting with them and maybe even giving training a couple a go.
>>13748 they're more or less a placebo that adds random noise and skews your generation style
>>13749 >placebo meh, what a shame
>>13751 Should've mentioned: technically LoRAs apply both positive and negative weights, which is why your mentioning (positive) brought this to mind
what are some of the current "base" realism models along the lines of F222, Basil, cafe-instagram? I want to experiment with some mixes. And what would be the best way or settings to experiment with when merging real models together?
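As background for the merging question: the basic weighted-sum merge that checkpoint mergers perform is just linear interpolation over matching tensors. A sketch of the idea, assuming two already-loaded state dicts (plain dicts of numbers here so the arithmetic is easy to check; real merging would do the same over torch tensors):

```python
def weighted_sum_merge(sd_a, sd_b, alpha=0.5):
    """Linear interpolation: (1 - alpha) * A + alpha * B for shared keys.

    alpha=0 returns model A unchanged, alpha=1 returns B (for shared keys);
    keys only present in A are carried over as-is.
    """
    merged = {}
    for key, a in sd_a.items():
        if key in sd_b:
            merged[key] = (1.0 - alpha) * a + alpha * sd_b[key]
        else:
            merged[key] = a  # keep A's weight when B lacks the key
    return merged
```

For experimenting, sweeping alpha in coarse steps (0.25/0.5/0.75) and comparing grids of the same prompt/seed is the usual low-effort way to find where a mix lands.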
>>13754 the forbidden model
>>13755 ...except that one
>>13754 I'd say ChilloutMix. I think it's good, but it's very catered toward Asian faces/features, so if you don't want your characters looking like a Korean MMO mobile ad, don't use it
>>13745 ah yeah, blade is my lora, it's a bit old and I want to rebake it to not be dim128, but link: https://mega.nz/folder/CEwWDADZ#qzQPU8zj7Bf_j3sp_UeiqQ/folder/iYwwQQpA mine is still on the gayshit repo, it's the v2 version I baked a long time ago
>>13758 ahh makes sense, the filenames didn't match so I wasn't sure it'd be the same one. cute pics earlier btw, I was toying around with them as a base earlier >>13746 tyvm!
Just as I was finishing up an update to the scripts, this update drops...

4 Apr. 2023 (2023/4/4):
- There may be bugs because a lot changed. If you cannot revert the script to the previous version when a problem occurs, please wait a while for an update.
- The learning rate and dim (rank) of each block may not work with other modules (LyCORIS, etc.) because the module needs to be changed.
- Fix some bugs and add some features:
  - Fix an issue where .json format dataset config files could not be read. issue #351 Thanks to rockerBOO!
  - Raise an error when an invalid --lr_warmup_steps option is specified (when warmup is not valid for the specified scheduler). PR #364 Thanks to shirayu!
  - Add min_snr_gamma to metadata in train_network.py. PR #373 Thanks to rockerBOO!
  - Fix the data type handling in fine_tune.py. This may fix an error that occurs in some environments when using xformers, npz format cache, and mixed_precision.
- Add options to train_network.py to specify block weights for learning rates. PR #355 Thanks to u-haru for the great contribution!
  - Specify the weights of 25 blocks for the full model. No LoRA corresponds to the first block, but 25 blocks are specified for compatibility with "LoRA block weight" etc. Also, if you do not expand to conv2d3x3, some blocks do not have LoRA, but please specify 25 values for the argument for consistency.
  - Specify the following arguments with --network_args:
    - down_lr_weight: learning rate weight for the down blocks of U-Net. Either per-block weights, 12 numbers such as "down_lr_weight=0,0,0,0,0,0,1,1,1,1,1,1", or a preset such as "down_lr_weight=sine" (weights follow a sine curve; sine, cosine, linear, reverse_linear, zeros can be specified). Appending +number, as in "down_lr_weight=cosine+.25", adds that number to every weight (giving e.g. 0.25~1.25).
    - mid_lr_weight: learning rate weight for the mid block of U-Net. One number, such as "mid_lr_weight=0.5".
    - up_lr_weight: learning rate weight for the up blocks of U-Net. Same format as down_lr_weight.
    - If any of these arguments are omitted, 1.0 is used. If a weight is set to 0, the LoRA modules for that block are not created.
    - block_lr_zero_threshold: if a weight is not more than this value, the LoRA module is not created. Default is 0.
- Add options to train_network.py to specify block dims (ranks) for variable rank. Specify 25 values for the full model of 25 blocks; some blocks do not have LoRA, but always specify 25 values. Specify the following arguments with --network_args:
  - block_dims: the dim (rank) of each block. 25 numbers, such as "block_dims=2,2,2,2,4,4,4,4,6,6,6,6,8,6,6,6,6,4,4,4,4,2,2,2,2".
  - block_alphas: the alpha of each block. 25 numbers, as with block_dims. If omitted, the value of network_alpha is used.
  - conv_block_dims: expand LoRA to Conv2d 3x3 and specify the dim (rank) of each block.
  - conv_block_alphas: the alpha of each block when expanding LoRA to Conv2d 3x3. If omitted, the value of conv_alpha is used.

Fucking hell, this is either gonna do nothing or completely change how everything is baked... hah. I'll get to work on updating it; seems like it'll be easy enough, though I'll have to drop LyCORIS support until it's updated, so no more loha for the time being. locons still work though because they're naturally supported. And I still need to make the xti script too... ok, lots of work ahead, that's fine, I'll do it. I will have to make a new popup for the unet values though; 25 input boxes shaped like a U will probably make the most sense here. I'll get it to work and make it easy enough to use for the end user. Might take a day or so though; till then, I'll release the current update I was working on, the resize + locon extract queue update.
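To illustrate the preset semantics described in that changelog: a preset like "cosine+.25" maps a curve over the 12 down (or up) block indices and adds the offset to every weight. The sketch below is a reconstruction of those semantics as described above, not kohya's actual implementation, so treat exact endpoint behavior as an assumption:

```python
import math

def block_lr_weights(preset: str, n: int = 12):
    """Approximate the named lr-weight presets over n block indices.

    'sine' rises 0 -> 1, 'cosine' falls 1 -> 0, 'linear' rises,
    'reverse_linear' falls, 'zeros' disables every block.
    A '+0.25' suffix shifts all weights up by that amount.
    """
    name, _, offset = preset.partition("+")
    base = float(offset) if offset else 0.0
    weights = []
    for i in range(n):
        t = i / (n - 1)  # 0 .. 1 across the block indices
        if name == "sine":
            w = math.sin(t * math.pi / 2)
        elif name == "cosine":
            w = math.cos(t * math.pi / 2)
        elif name == "linear":
            w = t
        elif name == "reverse_linear":
            w = 1.0 - t
        elif name == "zeros":
            w = 0.0
        else:
            raise ValueError(f"unknown preset: {name}")
        weights.append(round(w + base, 4))
    return weights
```

Under this reading, "down_lr_weight=cosine" weights the earliest down blocks most heavily and the latest ones near zero, which matches the IN00=1 / IN11=0 guess discussed below.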
>>13760 glad you are enjoying something I made!
>>13761 nice, gonna wait for that update to get gamma for next bakes then I hope it won't force torch 2.X.0 anytime soon though since I can't get it working
>>13764 nah, I'll continue to support the older torch 1.13.1 because of compatibility issues, anyways, gotta write up a readme and release the current update.
>block weight for learning rates If I specify down_lr_weight = cosine, does this mean that IN00 will have weight 1 and IN11 will have weight 0? If that's the case, then I'm guessing the new meta will be down_lr_weight=cosine mid_lr_weight=0 up_lr_weight=cosine for styles and down_lr_weight=sine mid_lr_weight=1 up_lr_weight=sine for characters
>>13761 so was KohakuBlueleaf's method for LoCons never using blocks? Because this method of having to specify each dim for 25 blocks is insane; it's going to turn baking into a nightmare of figuring out which combination is best.
>>13767 I think I'll go back to traditional LoRAs at this rate; this new meta is going to require too much trial and error just to release a new LoRA
>>13767 If you have a good dataset, using the highest dim for all of them and then using resize is good enough. I think this is only useful if you have a shitty dataset where all images of your character are of the same style because you only used screenshots from the game they're in
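For reference, the "resize" mentioned here is done with the scripts' resize_lora.py, roughly like this. File names are made up for illustration, and the flag names should be checked against the script's --help for your version:

```shell
# Shrink a rank-128 LoRA to rank 32 after baking at high dim.
# Hypothetical input/output names; run from the sd-scripts checkout.
python networks/resize_lora.py \
  --model blade-128d-512x.safetensors \
  --save_to blade-32d-512x.safetensors \
  --new_rank 32 \
  --save_precision fp16
```

The idea matches the post above: bake at a generous dim, then cut the rank down and compare outputs, rather than guessing per-block dims up front.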
>>13766 possibly, but it can probably be very useful for baking a lora on specific parts of an artist's style as well, such as learning the eyes but not the rest, or the lighting but nothing else. I can see it being pretty useful overall. >>13767 yeah, pretty much the biggest reason I immediately frowned when I saw the update. The good thing is it doesn't actually replace the normal way to train, so you can just ignore it if you don't want to deal with it. I unfortunately have to, because I'm sure some people will want to use it. >>13769 very good point, I can see a few uses for it, and that is definitely one of them
>>13769 but even when I'm mixing with blocks for models, what I've seen and what others have described for each block's influence is "uh, it kind of influences the lines or backgrounds"; there isn't a clear answer. I make sure my datasets are good. Maybe I'll use this method if I'm dealing with a "single scene" LoCon, but otherwise it sucks that I have to drop KohakuBlueLeaf's LoCon method now, since it never used blocks properly
>>13771 it's only until kohaku updates it on their end, though you could definitely just use an earlier commit of the scripts still too
>>13772 I wonder what KohakuBlueleaf's method was even doing differently compared to the traditional block method then; what did their conv_dim/conv_alpha influence alongside the method Kohya used?
>>13773 I honestly have no clue, I didn't exactly look too far into it.
>>13649 No good free option yet, and paid ones watermark their files to possibly hunt you down later.

