/hdg/ - Stable Diffusion

Anime girls generated with AI



8chan.moe is a hobby project with no affiliation whatsoever to the administration of any other "8chan" site, past or present.

/hdg/ #11 Anonymous 03/24/2023 (Fri) 14:07:09 No. 12556
Previous thread >>11276
getting two 3090s for the price of a 4080... it pisses me off how much GPUs cost now compared to back then. gamers just don't care anymore.
>>12557 >4080 that card is a fucking scam also gaymers have been cucked and gaslit so hard by jewvidia they fell for the crypto miner is the enemy meme and now that ETH miners are cucked, but nvidia cant blame anyone else for their shit cards and high prices anymore so people will start to care again.
>>12558 I just hope in a few years time we start having 4090s in the used market for great deals like I got with the 3090s kek
>>12529 I just realized that some artists are drawing her halo incorrectly... no wonder the halo was fucked
>10h 40m left on training
wish I had a 48GB card, because fuck this. Hoping the porn images I added help to finally make some coom with this model.
>>12561 finetuning is going to make me want a data server GPU isn't it...
>>12562 Depends how seriously you wanna take this "hobby". i took a peek at the waifu diffusion discord and the model trainer showed off his 3x A6000 setup, which was around 20k. Not sure what else was in it hardware-wise, nor if he already had it on hand or built it for training, but he mentioned that he's still trying to recoup the cost.
(2.19 MB 1024x1536 catbox_vh47x8.png)

(7.64 MB 2304x2304 grid-1578.png)

>>12546 trying my luck with the concept. how the fuck can you even train a lora so that the outputs always look the 'same', like looking to the right with an open mouth?
>>12565 also if the catbox script anon is here: https://pastebin.com/pH8MYN9b what am I doing wrong this time or why does it refuse to work again?
does a libbitsandbytes_cuda118 exist? I wonder if it would make kohya stuff faster
>>12567 probably would need to be built, I don't think it exists
>>12569 I see, we got lucky with triton cu118, so finding an anon that was able to build bitsandbytes with cu118 would have to be a similar case. anyways, found something cool: https://civitai.com/models/23723
>>12570 oh that's pretty cool. I am 100% sure i'd have no clue how to build it, but I might look into it later down the line, just... don't expect anything to come from it
>>12571 it's just a what-if desu, I honestly can only see it adding a 5 or 10% boost to my 3090s. 4090 chads will just get even faster
Been busy for two days, what did I miss? Had to RMA my ram and find a new kit. I'm starting to think that I'm never gonna get this build done lmao
>>12557 lmao same here but I can't get another ftw3u yet
>>12573 Just pytorch updates for webui and sd-scripts, you have to do it yourself for the webui and for sd-scripts you can easily update with easyscripts
>>12575 only video I found about it kek, and this guy got xformers to work normally with the cu118 build https://youtu.be/pom3nQejaTs
decided to rebake Nilou. I hate how I start to notice every little imperfection compared to when I used to coom to every shitty base NAI gen a few months ago
(1.72 MB 1024x1536 catbox_n4j4co.png)

>>12577 NAI and Stable diffusion has "terrible" for me in general, I keep finding flaws in images drawn by artists I like.
>>12578 has been* reeeeeeeeee
(850.64 KB 768x768 catbox_pgzdat.png)

>>12578 AI isn't the only one shit at hands or details; normal artists are just as bad at them
do we still have our ML anon that did Amber tests here? I'm wondering if the decay rate being set to 0.6 still works well with the betas that easyscripts has set (0.9/0.99)
I don't know how the /vt/ anons can do it but I can't bring myself to use the vtuber loras after watching some streams of them. Especially with how some of them are so completely unhinged and mentally ill that it's hard to imagine them in any sort of sexual lovey dovey situation.
>>12582 I just cum to them if they do lewd asmr or have hot designs, separate the menhera from the fap sessions
>>12583 https://www.youtube.com/watch?v=5F9Rzy2ZPfI It's hard bro, some of them are really far gone.
>>12584 >Shondo kek still remember that time before she got big after Fauna got into hololive when she was making narratives against her on 4chan when livestreaming
>>12582 Weak dick energy.
>>12584 Why does this one keep getting posted here? Is it the severe mental illness energy she emits? >>12585 No doubt going down the ny*nners history scrub route once she gets successful
(542.25 KB 512x768 catbox_t3a8gt.png)

(589.36 KB 768x512 catbox_lnldvy.png)

(637.81 KB 768x512 catbox_2qd7d1.png)

Tsukuyo Oono (Blue Archive) LoRA: https://files.catbox.moe/5fq8wo.safetensors Not enough shy rabbit art out there for a larger dataset, but it's GoodEnough™.
>>12580 I keep having to remind people of this fact. If the AI learns from artists that can’t do feet or hands, the AI won’t do feet or hands well either.
>>12582 The thing that got me before I ever started hanging out on /vt/ is that I somehow knew a bunch of forbidden knowledge on a lot of them so… their models never did anything for me to get off.
(4.09 MB 1920x1280 catbox_5yoarw.png)

(3.89 MB 1920x1280 catbox_3p4y3q.png)

(4.19 MB 1920x1280 catbox_mcfe7n.png)

(3.75 MB 1920x1280 catbox_zwnh6f.png)

Need more rabbits
>>12591 oooo forbidden knowledge ooooooooo they're all real women and most of them are ugly and psycho OOOOOOOOOOOOO
Has nothing to do with stable diffusion but Internet Archive might get shut down soon so download your shit
>>12593 Lmao I don’t care that there is a real person behind the model. It’s just one of those things where if I see porn of chubbas it does nothing for me, I just move on to the next image to fap. >>12594 >Nooooo you can’t make an archive of all of my political lies and schizophrenic takes on twitter noooooooo
>>12594 >Internet Archive might get shut down soon man I hope they move to like Russia or somewhere where copyright laws can't get them oh well RIP, data hoarders win again
>>12594 They only lost a battle regarding their e-book lending which they are going to appeal on the grounds that (((Google’s))) digital library won their defense on the same grounds so the judge is probably some retarded boomer.
Did anon ever post the shunny lora?
(2.86 MB 1920x1280 catbox_tt6md4.png)

(2.48 MB 1920x1280 catbox_s8ye8i.png)

(2.97 MB 1920x1280 catbox_xsszj6.png)

(2.79 MB 1920x1280 catbox_4ygg5i.png)

>>12598 not yet, I baked it a 5th time before I got busy for the day. I'm gonna test it in a minute, post some gens here, then release it if I think it's good enough. oh, and it's not just shunny, it should be able to do shun as well... not that this board would care
>30 more minutes of training Shit this wait has been more annoying than usual
>>12599 Nice cyberpunk mari
(2.65 MB 1920x1280 catbox_86nvkn.png)

(2.92 MB 1920x1280 catbox_hjlftg.png)

>>12602 the mari lookalike, as people stated... long story short, she's my OC, Hifumi, but uh, yeah, she's very close in design
>>12603 it's funny actually, the sci-fi in the prompt completely removes the two-tone aspect of her hair, and unfortunately she is far enough from the camera that the gemstone green eyes don't usually appear on this prompt. Very much was expecting at least one person saying she is Mari, as it isn't the first time
>>12603 Yeah I know, I was just teasing. Outside the name memes, she's cute.
>>12605 ah I see, I keep forgetting that most people on this board already know of her. either way, thanks! I thought these were pretty good gens as well! I'm not entirely sure why, but I ended up going with small breasts instead of flat chest this time, despite her normally being flat chest
What line do I need to change to upgrade to torch 2.0?
(1.38 MB 1024x1536 00086-958371943.png)

(1.60 MB 1024x1536 00088-958371945.png)

(1.67 MB 1024x1536 00087-958371944.png)

(1.59 MB 1024x1536 00089-958371946.png)

shun test:
halo is consistently the correct shape somewhere in 30-45% of gens; too much variety on the halo in the dataset for it to be entirely correct, even when the shape is right.
dress is generally correct, but also a bit inconsistent, likely because it shares tags with shunny's dress; it doesn't often get the flower pattern, but most of the dataset didn't have it, so that's expected.
little gemstone thing is a good 50%, acceptable.
hair ornament is a good 85% accurate, though the dataset is a bit inconsistent there as well, so expect it to mess up occasionally.
purple inner hair seems generally consistent, purple hair tips are not, but that's fine.
onto the shunny test. these were the first gens I made with it, but the halo was less accurate as I genned more. it's consistent enough that I don't mind it though. (I didn't do extensive testing, but good first tests)
(1.15 MB 1024x1536 00106-213023752.png)

(1.65 MB 1024x1536 00107-213023753.png)

(1.13 MB 1024x1536 00108-213023754.png)

(1.25 MB 1024x1536 00109-213023755.png)

>>12608 shunny test:
halo is more consistent, it seems, getting the correct shape a good 60-70% of the time.
dress is generally correct, usually missing the floral pattern; might be the position.
hair ribbon is correct most of the time, so that's good.
I don't think there is too much of an issue here, I'll probably gen some preview images and upload it to mega and civitai, once I do, I'd love to get some feedback, because I might have gotten really lucky, despite genning a good 20 images of each
>>12609 >I don't think there is too much of an issue here, I'll probably gen some preview images and upload it to mega and civitai, once I do, I'd love to get some feedback, because I might have gotten really lucky, despite genning a good 20 images of each I'll try it out a bit after you post it
>>12610 I just noticed that shun doesn't have any images in the dataset for sex, shunny most definitely does, but shun doesn't... apparently, that might make it really difficult for shun to sex
(1.12 MB 768x1152 00145-653983755.png)

(991.70 KB 768x1152 00144-653983754.png)

(1.03 MB 768x1152 00147-653983757.png)

(1.07 MB 768x1152 00146-653983756.png)

>>12611 hmm, it is possible, but requires a bit of prompt magic, though it might be based65 screwing me over this time
Well holy shit, it kind of works. So with this finetune test I did, I introduced non-ufotable data into the mix but made my own quality tags with the studio name so I can prompt Ufotable (or in theory any studio I add in the future) to force the style to come out. >Image one without studio tag prompted >Image two with studio "Troyca" tag prompted >Image three with "Ufotable" prompted Not sure why the non-prompted image decided to look like a 2000s hentai and overpower the big ufotable dataset since the hentai images are less than 1/3rd the size but it could be this particular combination of tags that I just yanked from my outputs folder. Will continue testing.
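for anyone wanting to try the same thing, a rough sketch of what that studio "quality tag" step could look like (this is NOT the anon's actual pipeline; the caption-file layout and tag format are assumed, i.e. one booru-style .txt caption next to each image):

```python
from pathlib import Path

def prepend_studio_tag(caption_dir: str, studio: str) -> int:
    """Prepend a studio 'quality tag' to every caption .txt file.

    Returns the number of files updated. Files already starting with
    the tag are skipped, so rerunning is safe.
    """
    updated = 0
    for txt in Path(caption_dir).glob("*.txt"):
        tags = txt.read_text(encoding="utf-8").strip()
        if tags.startswith(studio):
            continue  # already tagged from a previous run
        txt.write_text(f"{studio}, {tags}", encoding="utf-8")
        updated += 1
    return updated
```

run it once per studio folder (e.g. `prepend_studio_tag("data/ufotable", "ufotable")`) before kicking off training, then the studio name becomes promptable like any other tag.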
(1.03 MB 768x1152 00148-733171676.png)

(1.03 MB 768x1152 00150-733171678.png)

(1.03 MB 768x1152 00151-733171679.png)

(921.57 KB 768x1152 00149-733171677.png)

>>12612 I may be a bit of an idiot; literally all I needed to do to get better results was add nsfw to the prompt... because I didn't have it, for some reason. now the results are much better
>>12613 first looks like satoshi urushihara
>>12614 >>12612 Now do NSFW with the loli ver
(1.04 MB 768x1152 00155-119163578.png)

(923.84 KB 768x1152 00153-119163576.png)

(1018.10 KB 768x1152 00154-119163577.png)

(1.04 MB 768x1152 00152-119163575.png)

>>12614 alright, now to offset the big booba a bit >>12616 literally was just genning up some to test it. at least I know that the flower print is more consistent than first thought
>>12617 SEX Need SEX with this little creature
(904.47 KB 768x1152 00158-852932967.png)

(905.31 KB 768x1152 00156-852932965.png)

(947.77 KB 768x1152 00157-852932966.png)

>>12617 sex isn't that bad either
(1.21 MB 1280x768 catbox_lf733a.png)

(1.44 MB 1280x768 catbox_7te79l.png)

(1.03 MB 768x1152 00163-618506077.png)

(877.56 KB 768x1152 00162-618506076.png)

(976.88 KB 768x1152 00160-618506074.png)

>>12618 here's some more for you. halo is less consistent when sex, but I don't think people will care too much, I think this is good enough, might rebake again in the future, but for now, is fine
>>12620 The funny part is that there's probably some japanese man crazy enough to do these
>>12622 If i didn't care at all about what people thought of me i would cop the nenechimobile, no doubt in my mind
>>12620 guddo jobu, was on my list for a while
>>12624 i did it real quick with only 50 images so it doesnt really work that well
>>12615 yes, because one of the hentai I used is Front Innocent. Surprisingly the hentai data is mostly Bible Black, which makes it even funnier: 1449 images vs the 25,142 of all of Bible Black. Granted, I have not filtered any of the tags on the hentai data, since this was more or less a test to see how porn would work; I'd probably rather use porn art pieces instead of more anime screencaps of hentai. Another set of without/with ufotable prompt.
Gotta ask: with all the new shit is there even really a reason to finetune a model?
Welp, I'm going to bed. i might play with this more tomorrow. i've only tested with blood orange mix, but i guess any model that can prompt cars and has a bit more realistic merged in could work as well. You need to use "a photo of a car, from the side, aniwrap" for starters, then you can describe the girl; you probably also have to additionally add "car, vehicle focus, photo background" somewhere else in the prompt. https://files.catbox.moe/ozc0du.zip
(1.36 MB 1280x768 catbox_a05rub.png)

>>12629 If this was a real car then the pussy or ass should be where you plug in for gas
>>12567 I saw it in my venv folder after using the easyscripts torch updater (still broken for me by the way), where did it come from if it doesn't exist... >>12590 While that's true for anime art, there are genuine limits to these SD-based models. Like, the vast majority of the data is going to be reasonable photos of hands, yet you still can't generate them on base SD. Another one is text, there's more than enough cogent text in the dataset but simply not enough parameters in SD to learn. Imagen can draw text with close to 8 billion parameters. Midjourney has apparently gotten better at hands which was making normalfags go apeshit but their architecture seems to be a mystery
(1.73 MB 1024x1536 catbox_fo5se6.png)

(1.02 MB 1024x1536 catbox_yagjja.png)

(2.09 MB 1024x1536 catbox_i2jdq5.png)

(2.38 MB 1152x1536 catbox_k11ypr.png)

Alright, finally finished up with making preview images and shit. Civitai can seriously fuck off though, I had a hard time getting a decent gen of shunny that wouldn't get flagged, because guess what? china dresses are inherently nsfw on civitai. hah, whatever, I got it working well enough.
download link to shunny + shun on civitai: https://civitai.com/models/24046
download link to mega drive if that's more your style: https://mega.nz/folder/CEwWDADZ#qzQPU8zj7Bf_j3sp_UeiqQ/folder/yEggULhI
note, I baked this at dim16, seems to get all details, but might still need some more work on it. I'm releasing this bake now both to get some feedback on it, and to see just how low I can go in dims for BA characters, which are notoriously complex.
A few small notes:
>genning at 576x768 seems to work better than 512x768
>even though I didn't have halter dress tagged, shun's dress learned to that token, so I suggest using it over black dress
>the halo will generally retain the correct shape, but is less likely to be the correct shape when doing sex prompts on both shun and shunny
>both datasets had a lot of nsfw stripped out when I reduced the dataset size to account for trying to bake the halo in better, so you might have to up-prompt nsfw tags or use loras (style or otherwise) that are inherently horny
>I didn't test while using any style LoRA, so I'd definitely like to know if that works out for people
As a final note, thanks for letting me be a bit autistic about this and taking more time. I wanted this to be a high quality LoRA, and hopefully I succeeded.
Few quick and lazy gens with a minimal prompt for testing. Just stuff like loli, black hair, halo, shun. It seems to recognize the character but not the clothes but i'm not surprised because I haven't prompted for specific clothes yet.
(1.39 MB 840x1256 catbox_h7aie9.png)

Quick Bailu lora. Wanted to see how quickly I could make a lora from scratch, editing out watermarks, resizing, etc. Took about an hour total and I only tested once to make sure it works, but it should be as good as the Diona one, which I also recently redid on low dim https://mega.nz/folder/2FlygZLC#ZsBLBv4Py3zLWHOejvF2EA/folder/fBVw2DBL
>>12634 Please make a Clara one too
>>12634 Was wondering when we'd start seeing honky rails
(486.72 KB 512x768 catbox_p751g8.png)

(1.46 MB 840x1256 catbox_ywrdiw.png)

(1.22 MB 840x1256 catbox_petjzw.png)

(1.25 MB 840x1256 catbox_xd90w2.png)

Some better example images. Also uploaded a s16xue lora for anyone who's into that
>>12633 I see. I'm just slightly surprised how well it gets her still, because I didn't really prune tags this time around; hair style differences prevented hair pruning, and for one reason or another I opted to have her animal ears under the tag "tiger ears", which honestly could have been pruned and made a permanent feature. I think at the time I had the thought of "oh let's not prune the ears so that it's easier to remove" for some reason. But yeah, clothing was not pruned at all (for flexibility reasons), though I did put some effort into reducing the total number of tags used for the outfits.
>>12627 Lower end finetuning like mine or hll isn't necessary, but provides better attention to niches for unique generations. hll anon's model and its custom based mixes make it better to prompt chubbas across the board without being limited to the scope of LoRAs. In terms of big base models, eventually someone will need to make a better SD base that NAI and others can then finetune into better overall models for us to play with. So yea, finetuning will still be needed, but you don't need to get in on finetuning if you don't plan on hoarding hundreds of thousands of images nor have 24 GB card(s) on hand.
(1.42 MB 840x1256 catbox_5ckgxp.png)

(1.27 MB 840x1256 catbox_r2zgbv.png)

I just like the lolis
>>12642 It just faps
Did civitai just delete a fuckton of preview images?
Found a folded lora in civitai and it REALLY does not play nice with other character loras. It's decent when used alone but it's far worse than ye olde full nelson loras to actually get the character in the pose without body horror
>>12646 https://civitai.com/models/18512/folded This one for reference. I love the position but this lora doesn't play nice with anything at all. Barely even plays nice with base models to be honest.
>>12648 didnt mean to quote that
>>12634 thank you bro Now I can proudly say my first star rail fap was bailu
I can safely say it does porn better than before, especially since not all the porn images were tagged correctly. Getting some niche stuff better than base NAI too, which is a plus. Going to heavily edit and prune down the hentai data and slowly replace the unneeded images with booru porn of anything I feel is missing. If the next training goes well, I'll share it here and you guys can test it out for any problems and such.
>>12528 >>12529 >>12531 What's that code part? And presumably if I just save the venv somewhere else, in case of a mess up I can go back? And do I have to change the startup .bat file to remove xformers from it, or leave it as is?
>>12578 >>12580
>Why the fuck does the AI keep doing these shirt open poses on everything
>Why does AI keep doing this shit or bad hands
>Notice all my favorite artists do the same thing
It's actually hilarious but really ruins / opens your eyes.
>>12646 Wow... it really does ruin character lora... shunny looks so generic, sadness
>>12632 I saw you used based65. honestly I'm not happy with it after seeing how it loses some LORA details compared to 64, and I wonder if the reason for that is the two finetuned models I put into the final mix; it wouldn't make sense, but the only finetuned model that 64 had was hll3. I'll see what I can do with based66 with hll4-beta and see if it does better with retaining LORA details compared to 65
>>12657 Someone on /vtai/ mentioned that hll4 uses a differently packed version of NAI-full and that it’s screwing up add-difference merges. Not sure if that is relevant to your method. Hopefully hll anon chimes in here again to give more direct info.
Inpaint gods, how do I get super interesting backgrounds? I tried inpainting the backgrounds themselves and re-running the prompt with high denoise, but it somehow makes it worse. I just don't know how this magic tool works.
I don't know why the fuck this worked, especially on my finetune, but I'm gonna share it

(masterpiece, best quality), highres, 4k, 8k detailed, intricate, (monocolor, flat color, (blue:1.3) greyscale:1.2), realistic (large breasts:1.2), wide hips, [[closeup]] (lingerie), (beautiful, detailed eyes, eyeliner, eyelashes.:1.05) (wreath garland background:1.4) intricate (high detail:1.1) body, skindentation, (shiny skin:1.05)
Negative prompt: (low quality, worst quality:1.4)
Steps: 20, Sampler: DDIM, CFG scale: 7.5, Seed: 3620198121, Size: 512x752, Model hash: b3e1b725e3, Model: ExpMixLine_v2, Denoising strength: 0.31, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B

yanked this from /g/; my results above are with my latest ufotable finetune instead of the model in the prompt, as well as some tag cleanup.
>>12660 What even is finetuning
Not sure if anyone knew this, but every time you complete a 75-token set or use BREAK to start a new set, you lose your quality tags and you need to reprompt them.
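the reason is easy to see with a toy sketch (this just mimics the window-splitting; the real tokenizer is CLIP's BPE, and the per-window encoding is an assumption stated for illustration, not webui source):

```python
def chunk_tokens(tokens, size=75):
    """Split a token list into independent 75-token windows.

    Each window is encoded separately, so a quality tag sitting in the
    first window has no influence on tokens in later windows -- it has
    to be repeated in each one.
    """
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

# 160 toy tokens end up as three windows of 75 / 75 / 10
tokens = [f"tok{i}" for i in range(160)]
windows = chunk_tokens(tokens)
```

so if your prompt spills past 75 tokens (or you use BREAK), your masterpiece/best quality tags only live in whichever window they were typed in.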
I saw the recent BA LORA was made with the text encoder LR at 1e-5. that's interesting, but I remember a while back it was recommended to have the text LR and unet LR be the same, with the exponent of the text LR just 1 higher
>>12659 backgrounds are 95% unintended consequences of your prompt and 5% outdoors, scenery, soft lighting, forest, river, golden hour if you are trying to engineer good backgrounds, there's an addon that can generate the subject and background separately based on depth detection
>>12617 Catbox please.
>>12657 I did use based65 because I really like it as a model, but I did notice that it does have that issue. To be honest though, I didn't actually think to test on different models; it probably would have reduced the total number of bakes if I had. The style made it less of an issue, though I did also notice that it has worse hands vs based64, which is the only real gripe I have with it.
>>12665 I'm on my way home right now, I'll catbox it when I get to my computer
(1.41 MB 1152x960 catbox_jwc997.png)

(1.35 MB 1152x960 catbox_d5dwlb.png)

(1.41 MB 1152x960 catbox_09vtsk.png)

(1.38 MB 1152x960 catbox_55wjm6.png)

I finished my Shinobu multi-concept LoRA, it has 9 variants of her in total. It actually works surprisingly well; the only concept that has trouble is Kizu's white dress version and that's probably because it shares a lot of tags with the TV white dress version. Full instructions and download: https://mega.nz/folder/OoYWzR6L#psN69wnC2ljJ9OQS2FDHoQ/folder/i0ZWwbLZ
(1.70 MB 1152x960 catbox_hz1lly.png)

(1.61 MB 1152x960 catbox_24t0wu.png)

(1.41 MB 1152x960 catbox_amxzep.png)

(1.40 MB 1152x960 catbox_ks8r4q.png)

>>12668 The rest of the previews (excepting TV white dress because it's the most basic version lolmao)
(1.60 MB 1072x1600 catbox_z9tarq.png)

(1.87 MB 1280x1024 catbox_g15p3d.png)

(6.23 MB 2560x3072 catbox_c1sxxz.png)

(1.37 MB 896x1344 catbox_iqhw8x.png)

>find a couple loras that don't work >end up having to reinstall the whole thing boy i love downloading pytorch
(1.78 MB 1072x1600 catbox_no87ou.png)

(1.68 MB 1024x1408 catbox_i7rxn5.png)

(1.75 MB 1024x1408 catbox_tgxtpq.png)

(2.39 MB 1072x1600 catbox_3bsbdx.png)

sorry if i posted any of these before, it's hard to keep track
(1.66 MB 1072x1600 catbox_tvw61e.png)

>now it's pitching errors with LoRAs that i know used to work why do i ever update anything god damn
>>12672 what? The webui is giving errors with LORAs now?
>>12673 the built in lora plugin seems fucked at the moment, switching back to additional networks everything works fine
>>12674 specifically, i hit this error: https://github.com/d8ahazard/sd_dreambooth_extension/issues/962 on a Hitokomoru .pt lora and every lora i tried after that started hitting the same error even ones that have worked for weeks
Lots of peeps having issues since the new commits, would hold off git pulling until they fix their shit
>>12677 and to believe that updating pytorch and torchvision yourself doesn't break anything compared to Voldy doing shit holy kek
>>12678 Reminds me I should do that today before I forget, gonna make a backup of the venv in case I break it though.
>>12679 yeah, no need to worry. the only error you might get is if you do 2.1.0 and have no triton installed, and for some weird reason it's hard to get xformers working with that version of pytorch and with 2.0.0, but after just doing --xformers after running --force-enable-xformers first, it just ended up working
>people's shit is breaking everywhere
This is why I let people beta test first before I pull, "thanks for beta testing" etc.
>>12645 they updated the setting to turn off NSFW content by default, check your settings.
fwiw I have no issues on the latest commit with torch 2.0.
New TI embedding tech from Google. https://prompt-plus.github.io/ https://github.com/cloneofsimo/promptplusplus The paper directly states it was inspired by the work of LoRA for diffusion models. It applies a similar concept that LoRA does where now the weights are applied differently for each layer of the UNet, similar to how LoRA models function. From what I gather reading the paper, it means we could have the quality of LoRA with 100x smaller filesizes, even faster training, and less cross contamination with concepts (see section 4.2 from the paper).
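a toy illustration of that per-layer idea (assumed structure for illustration only, NOT the promptplusplus repo's actual API): classic TI learns one embedding reused at every cross-attention layer, while P+ (XTI) learns a separate embedding per layer:

```python
EMB_DIM = 768        # SD1.x CLIP text embedding width
N_CROSS_ATTN = 16    # cross-attention blocks in the SD1.x UNet

# classic textual inversion: one learned vector, shared by every layer
classic_ti = [0.0] * EMB_DIM

# P+ / XTI: one learned vector per cross-attention layer
xti = [[0.0] * EMB_DIM for _ in range(N_CROSS_ATTN)]

def embedding_for_layer(layer_idx, per_layer=True):
    """Return the token embedding injected at a given UNet layer."""
    return xti[layer_idx] if per_layer else classic_ti
```

the storage cost is still tiny (16 vectors instead of 1), which is where the "100x smaller than LoRA" filesize claim comes from.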
>>12683 I'll wait till we have more people work on it because Cloneofsimo's initial LORAs examples looked like shit, just gonna wait till we have more ML majors work out with this implementation
(1.82 MB 1024x1536 catbox_7wf89l.png)

(1.92 MB 1024x1536 catbox_jeo84p.png)

(2.03 MB 1024x1536 catbox_9ab9di.png)

(1.82 MB 1024x1536 catbox_m1gnqo.png)

>>12676 ended up just genning more shunny, wasn't planning to, but I did
>pixiv going through with their ban on all "indecency" artwork it's not just loli but holy shit the credit card companies are ran by jews and some new world shit order, I want to go back bros...
(1.89 MB 1024x1536 catbox_sbm8no.png)

(1.69 MB 1024x1536 catbox_b9nqwg.png)

(1.84 MB 1024x1536 catbox_gmkb6m.png)

>>12686 and some sex, because shun apparently is too much for me to let be
>ui now looks like utter garbage
>dropdowns turned into html+js bullshit instead of just using a fucking option element like it was before
>doesn't let you type to go to an option anymore because it's a fucking div with elements inside
Gradio was a mistake. Don't update the UI, and if you made the mistake of updating, roll back to a9fed7c3
(2.16 MB 1536x1024 00424-3934000324.png)

(2.04 MB 1536x1024 00433-3483721082.png)

(2.57 MB 1536x1024 00375-1713842412.png)

(2.50 MB 1536x1024 00419-2557563328.png)

Fantasy bunnies
>>12689 what was even the reason for this new commit to upgrade Gradio? Did it initially have benefits like upgrading torch?
How do I upgrade my torch to 2.0?
>Keep getting too spooked to upgrade to new torch
This won't brick my ability to make loras with sd-scripts / ps1 scripts, right? About my only hesitation is having to troubleshoot fixing that myself.
>>12691 probably just voldy being voldy and breaking shit as usual
>>12693 in the launch.py file, make sure that line 240 is
>torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==2.0.0+cu118 torchvision==0.15.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118")
launch the webui without any commands, just have it plain, and then after that add all of your old commands but replace --xformers with --opt-sdp-attention
>>12694 No, it won't brick your ability to make LORAs
>>12696 oh yeah, back up your venv first, or just rename it:
>venv now becomes venv2
then running webui-user.bat will just make a new venv folder with pytorch 2.0
I forgot how annoying working with a LORA with less than 50 images is
>>12691 don't get why he still hasn't made a dev branch or semver'd the code so people don't have to put up with this shit on a weekly basis
>>12696 I did not back up the fucking COMMANDLINE_ARGS, what does that look like again? the commandline args, medvram and some other shit?
>>12699 unironically ruski stubbornness, still remember the fight he had with the comfyUI dev on /g/ kek
>>12700 uh yeah >--medvram --lowvram for the vram options
>>12702 I mean, what does that entire line look like in the webui-user.bat file? I deleted mine.
(1.42 MB 1024x1280 00003-1480143465.png)

flat
>>12703
@echo off
set SAFETENSORS_FAST_GPU=1
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= *whatever commands you had here, like --medvram* (don't forget to add --opt-sdp-attention too)
call webui.bat
>>12705 I set those and now it won't open. It opened before with the new venv but now it won't even open when I closed it to put those in the commandline.
>>12706 >>12705 holy shit nevermind I'm profoundly retarded. I forgot the call.webui.bat when I was copying it over. Though now I'm getting >launch.py: error: unrecognized arguemnts: too
>>12707 did you copy --opt-sdp-attention too verbatim
>>12707 >>12706 holy shit nevermind, I'm profoundly retarded. I forgot the call webui.bat line when I was copying it over. Though now I'm getting
>launch.py: error: unrecognized arguments: too
>>12708 You don't include xformers right? Hard to tell if it's faster but it's at least same speed without xformers.
>>12710 I also lost the ability to upscale with sd ultimate at the higher resolution I was using. Think I'm going back if I can.
(2.65 MB 1920x1280 catbox_xozyzo.png)

(2.72 MB 1920x1280 catbox_qb86yu.png)

(2.63 MB 1920x1280 catbox_uswc6k.png)

(2.58 MB 1920x1280 catbox_m3u4p0.png)

>>12690 "Fantasy" bunnies huh? fine, I raise you "Sci-Fi" Hifumi >>12712 shunny really is cute, I almost didn't stop genning her in the end
>>12696 for me torch 2.0 is about 15% faster than xformers in the first generation and about the same speed in hires fix, 2080 ti. no noticeable change in quality
>>12701 This happened? I thought voldy stopped visiting the threads for a long time now.
>>12715 this was when comfyUI was just released I forgot the full context but I think it involved some code Voldy did that the comfydev didn't like and wouldn't change
>>12716 I see. I did a quick search of the archives for it but couldn't find anything, not like I'm that interested in reading the drama anyway. Really fascinating how both devs of the most popular UIs are such interesting characters, I ain't gonna complain though when they've provided all this for free and it just werks.
>reddit
but apparently the writer of the LoRA paper is doing a reddit AMA https://old.reddit.com/r/StableDiffusion/comments/1223y27/im_the_creator_of_lora_how_can_i_make_it_better/
(2.68 MB 1416x1416 38511-3967232360.png)

>>12668 very good
Do concept loras just need way more steps than a character lora to work properly? Been trying to make this one that does a body change and I'm at 5k steps and have to force it at weight 1.4 to see anything that I want. I wasn't sure if I'm not giving it enough cook time or my learning rate is scuffed or what
>>12723 post your script and folder set up
>>12683 Interesting. By their own admission fidelity is barely better than regular TI (Figure 8), with the LoRA version of fine-tuning conspicuously missing but probably falling between TI and DreamBooth. They imply XTI merging is easier but don't have numbers to back that up. If XTI's fast enough to train it could be a good compromise, maybe kill small dim LoRA kek Can't wait for schizos to shout down experimentation with this. Although it's actually been good for a few days. >>12687 Curious why they backed off for a few months then came back to it. >>12694 It borked for me but I had no problem rolling back to torch 1.12 until I find more time to tear my hair out over it.
>>12637 >s16xue lora YOU MAGNIFICENT GLORIOUS STUPENDOUS MOTHERFUCKER
I hate PC guides on youtube and other sites, these faggots aren't really thinking about future proofing. Literally if you buy a bigger case and a 1200w or higher PSU you're pretty much future proofed for the longest time
>>12728 literally the guy that I bought my 2nd 3090 from sold it because he moved and got a micro case, he couldn't fit it anymore, poor sucker but he was a normie so... blissful ignorance is a blessing in the used market
>>12728 we do a little futureproofing (my money is gone)
>>12728 Agreed, I've had a Corsair AX1200i since 2016(?) and never had to replace it even now that I bought a 4090.
>>12730 >was thinking of doing AM5 >SLI nvlink support is dead for consumer motherboards >stick with AM4 pain
>>12730 I literally just built a computer with the same internals for a friend, just less drives and a different card kek.
>>12732 Wait, you can't nvlink on AM5?
>>12732 unless the new motherboards for intel end up having SLI support (which I doubt) I'm going to have to stick with last gen processors kek, ancient hardware in PC years for a dead art >>12734 motherboards have to support it, the companies have to get a license from NVIDIA to have native support for it. Only the MSI Godlike has multi-gpu support but that's for AMD crossfire and it's a 1k motherboard
>>12735 well fuck, goodbye dual 3090s
>>12736 just get a 4090 too, that's what I was planning to do if I couldn't find the same 3090 for a good price like the first one I bought but I got "unlucky" and now I have to stick with last gen CPUs while looking for another motherboard because my current mb didn't have SLI support kek, I guess motherboards can be future proofing too but only for autists
>>12736 You can still do multi-GPU training iirc, you just won't get the rest of the performance boost.
>>12738 even with current cpus and motherboards I think he'll get 8x8 gen 4 pcie with his two gpus, the only time I have seen multi gpus run on 16x was for threadripper setups
>>12739 Fucking RIP. Time to invest in a H100 set up
(121.95 KB 649x311 msedge_vrSvOg8qQv.png)

>>12737 >get a 4090 too why do you think i got a used 3090? >>12738 I'll ask anus if the x670e-e supports nvlink or I'll just get another 3090 and keep both for a new build with a used 5950X
>>12741 >kike prices for euros too To think the 4090 is only 200 or 300 dollars less in my market, I blame scalpers but mostly nvidia for this bullshit. Normies still had the audacity to tell me that having a 3090 isn't worth it when VRAM is important for a lot of shit, especially when newer gen games are using a lot more VRAM at their best settings given most PC ports are coming from the PS5/Series X where the GPUs have 16gbs/13gbs of VRAM
>>12741 I did a quick search and got mixed messages but the consensus seems to lean that it is not supported on X670.
>>12743 SLI/Nvlink is dead for all future GPUs and the thought of bringing it back for the 4090ti is a circuit breaker mess, so yeah any new consumer motherboard isn't going to have it because the manufacturers aren't going to find the need to pay NVIDIA for the licenses
>>12742 >meant 4080 by the way, the 4090s in my place are still shit prices like the Euros
>>12742 4090 MSRP is supposed to be close to 1900€ here but most go for 2400 to 2600€ Normies will unironically tell you that faster vram > more vram even though it's been proven time and time again that this literally never matters on anything but xx10/20/30/50 budget cards
>>12745 4080s are 1400 to 1600€ here, mostly in the high 1500s
>>12742 I just took a look at newegg and the prices seem to be back down to earth near MSRP. The ASUS cards are still $2000+ for some reason but MSI cards are starting around $1600 new. I don't think these prices will last long.
>>12747 1800 here for the 4080, the "4070ti" which is just the 12gb 4080 is cheaper and closer to the msrp 1k-1300 but >>12748 I think so as well, it's just better to wait for the 4090 and get a used 3090 for now if you're serious with the AI autism. You'll save a lot more money compared to buying one of the other lame 4000 series cards.
>>12749 My plan was to get the 3090 for now and upgrade to the 4090/ti when the 5090 comes out (and keep the 3090 for a 5950X AI rig and optionally get another one and nvlink it)
>>12750 yeah seems like a great idea, future proofing a lot for "non server" based AI stuff, if we want to train our own models like waifudiffusion or NAI that's a whole other monster that is possible if you're a rich god. However I do believe you can run the leaked LLaMA facebook chat AI models on your PC with 24gbs of VRAM? I don't really care for text AI but it's a cool thing to have access to.
>>12742 I remember thinking my old 1070 being a bit pricey when I bought it in what feels like ages ago now, I can't say that I expected prices to get this bad. >Araragi in the back He's been showing up in my gens lately, leave my wives alone you. You have enough cunny already.
>>12749 When I bought my 4090, 3090s all over the place exploded to match the 4090 in price so I said fuck it and bought the cheapest one (MSI Suprim at MSRP). I'm still pretty happy with it, but as my autism gets more insane, the only way I'm gonna improve my hardware is looking at A100/H100 set ups and those are gonna take a while. I read that the WD trainer has 3 A6000s and still needed to rent out a cluster to train the first two epochs. It's getting kind of stupid.
>>12751 my original plan was to nvlink it on this right but uh.. yeah turns out i did 0 research on anything am5/x670/ddr5 But hey at least the 5950x and 3090 (i'll get an identical one asap if i can snatch it at the same price) will be pretty cheap by the time the 5090 comes out
>>12754 on this rig* brain farted
>>12754 Yeah that's what I plan to do, just upgrade my motherboard with an NVLINK supported one and run my two 3090s with the nvlink bridge until the 5090 comes out in 2025 or 2026 and lowers the prices of the 4090. Honestly I think by the time the 5090 comes out PCI gen 5 slots will ACTUALLY matter so future proof yourself with your motherboards if you actually plan on getting a 5090 for 3000 dollars kek
>>12756 Don't think I'll ever upgrade the 3090s to something else unless consumer (so no titans) cards get 48gb of vram, the used 4090 will be just for gaming
>>12757 >gaming I barely game anymore I just do gacha so I'm fine there, but yeah I really hope the Adalovelace Titan didn't get shelved but if it did there goes a dream for AI trainerbros.
>>12758 pretty sure that card got canned already. No 48GB VRAM 40 series card.
>>12759 >shit pricing even without accounting for scalpers >shit availability >nothing but the highest end beats out the previous gen high end by more than 10% What a shit series. I swear they have a good series once every 4 years. Remember when the 3070 matched the 2080 ti despite having 3 gigs of vram LESS?
>>12760 the 3000 series was great but the thing that ruined it was fake cryptobros and faggots promoting mining like big youtube tech channels that /g/ hates the most. Luckily they got what they deserved and their market crashed so we're getting used GPUs for really great prices. New ones still have garbage prices but those are scalpers desu
>>12761 ETH fags on both ends got what they deserved. Miners got cucked by their overwhelming (chink) greed, and the PoSers got a wake up call that preminers and pre-investors have them by the balls and own most of the masternodes so they will never get their interest gains. Crypto market will bounce back, but all these coomsumer miners who were reckless got their balls stomped into jelly and should've just spent that money on the actual crypto and sold at the highs of November 2021. As for the new GPUs, NVidia is gonna jew everyone until the bitter end, pre-covid they created artificial shortages of the product, blaming crypto miners to justify cranking up prices and then still selling mass wholesale of cards to the chinese, and then during covid they slashed their work force to increase their profit. They can probably last another handful of years before they need to start dropping prices down to earth, and that is assuming the OpenAI fucks don't somehow convince governments to start regulating the sale of GPUs for consumers.
>>12761 The only shit value offering the 3000 series has/had is the 3090 Ti vs regular 3090 without even accounting the 4090 drop 6 months later 100-150W and at least 500 bucks more for literally 10 FPS more at best despite the higher clock and more cores. What the fuck happened there?
>>12762 >OpenAI fucks don't somehow convince governments to start regulating the sale of GPUs for consumers. I forgot that is also possible aside from some fuckery that would regulate AI usage with consumers >>12763 That was the first step of NVIDIA showing the jew scams they would do for the 4000 series desu
>>12764 I like to imagine nvidia's internal struggles as epic battles of gamers vs jews Gamers won the 10 series battle, lost the 20 series battle and won the 30 series battle (at the cost of getting infected with jew genes and releasing the 3090 ti)
>>12718 can anyone with a leddit account suggest combining lora training with an ensemble of expert denoisers.
>>12724 It's the normal ps1 script probably floating around in the repo, folder structure is
A Concept 8 steps - 120 images
B Concept 4 steps - 110 images
C Concept 4 steps - 25 images
I think the C concept is fucking with me more than helping so I'll probably drop it, but before I was doing 5e-5 and at like 5k steps I can get it functional to an extent if I (x:1.3) it. Wonder if I should drop the C concept and make those 10:5 steps. I just don't know what kind of shit I need for concepts, charas are pretty easy or I've got a method for it. Latest one that seems kinda functional is
LR - 5e-5
unet - 1.4e-4
text - 1.5e-5
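For reference, the repeats and image counts above imply this many steps per epoch in a kohya-style setup (folder names here are invented stand-ins; only the repeat/image counts come from the post, and batch size 1 is assumed):

```python
# Step math for the kohya-style "repeats_concept" folders described above.
# Folder names are hypothetical; counts are from the post; batch size 1 assumed.
datasets = {"8_conceptA": 120, "4_conceptB": 110, "4_conceptC": 25}

# steps per epoch = sum over folders of (repeats * image count)
steps_per_epoch = sum(int(name.split("_")[0]) * imgs for name, imgs in datasets.items())
epochs_at_5k = 5000 / steps_per_epoch

print(steps_per_epoch)          # 120*8 + 110*4 + 25*4 = 1500
print(round(epochs_at_5k, 1))   # so 5k steps is only ~3.3 epochs
```

So at 5k steps the concept has only seen a bit over 3 passes of the data, which may explain why it needs weight 1.4 to show up.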
>>12766 Thats a good idea, but I'm pretty sure that would only work if the denoisers were trained like so in the base model.
So SLI is a pain, most 3 slot bridges are gone or scalped meanwhile the only good deal I had for a 4 slot bridge only works with 2 motherboards that have AM4 sockets >MSI X570 Godlike >EVGA X570 Dark I just got myself into some brain hurting shit for a dead art
>>12767 Question for myself: wasn't the base LR overridden by the unet LR, which is why it didn't matter what you put there? Thought someone said something like that
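For what it's worth, kohya-style trainers typically treat the base learning rate as a fallback that the module-specific LRs override when set — a hedged sketch of that resolution logic (check your trainer's actual source, this is not its real code):

```python
# Hedged sketch of how per-module LRs are commonly resolved in
# kohya-style LoRA trainers: unet_lr/text_encoder_lr win when set,
# otherwise the base learning_rate is used for that module.
def resolve_lrs(learning_rate, unet_lr=None, text_encoder_lr=None):
    return {
        "unet": unet_lr if unet_lr is not None else learning_rate,
        "text_encoder": text_encoder_lr if text_encoder_lr is not None else learning_rate,
    }

# With the post's values, the base 5e-5 never applies to either module:
lrs = resolve_lrs(5e-5, unet_lr=1.4e-4, text_encoder_lr=1.5e-5)
print(lrs)
```

Under that logic, yes: if both unet and text encoder LRs are set, the base LR does nothing.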
>>12769 I remember this one denoiser paper that trained an initial denoiser, then copied it and trained one copy for the initial steps and one for the final steps. They then repeated this process and ended up with a bunch of denoisers. I think using LoRAs to approximate this process would work.
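The splitting idea above can be sketched as a timestep-range dispatcher; everything here is a hypothetical stand-in (the functions would in practice be the base model with different LoRAs applied), not any paper's implementation:

```python
# Toy sketch of the "ensemble of expert denoisers" idea: split the
# timestep schedule into ranges and hand each range to its own denoiser.
def make_ensemble(experts):
    """experts: list of ((t_min, t_max), denoise_fn) covering the schedule."""
    def denoise(x, t):
        for (t_min, t_max), fn in experts:
            if t_min <= t < t_max:
                return fn(x, t)
        raise ValueError(f"no expert covers timestep {t}")
    return denoise

coarse = lambda x, t: ("coarse", x)  # high-noise steps: layout/composition
fine = lambda x, t: ("fine", x)      # low-noise steps: texture/detail
ensemble = make_ensemble([((500, 1000), coarse), ((0, 500), fine)])

print(ensemble(None, 900)[0])  # coarse
print(ensemble(None, 100)[0])  # fine
```

Approximating each expert as base-model-plus-LoRA and swapping by step range would be the cheap version of the paper's copy-and-specialize loop.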
>>12770 I contacted anus' support about nvlink on the x670e-e, they'll get back to me within 2 days. Why not just buy used btw? You're probably not gonna be able to power 3+ 3090s on anything but a 2000W PSU (3090s use nearly 400W so do the maths, 1600W for 3 is the minimum) anyway unless you heavily power limit them. If you're building a new rig altogether just get used shit for now, 2 3090s will do and even if you're extremely autistic about this I'd say 2 are the limit; if you need something more it's probably worth it to just use some paperspace or jewgle cloud tier with 2-4 80GB A100s
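Spelling out the maths above: the 400W per-card figure is from the post, while the rest-of-system draw and transient margin are rough assumptions, not measured numbers:

```python
# PSU arithmetic for a hypothetical 3x 3090 build. Only the 400W
# per-card figure comes from the post; overhead and margin are assumed.
gpu_draw = 400        # W per 3090 under load (post's figure)
n_gpus = 3
rest_of_system = 250  # W for CPU/board/drives (assumed)
margin = 1.2          # ~20% headroom for transient spikes (assumed)

required = round((gpu_draw * n_gpus + rest_of_system) * margin)
print(required)  # 1740 -> you'd realistically be shopping for a 2000W unit
```

Which is why 1600W is a bare minimum and 2000W is the sane target for three cards.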
>>12773 yeah that's the thing though the nvlink bridges that are 4 slot are the cheapest while the 3 slot ones which the average AM4 motherboard has for a good price new or used are easy to obtain while the only existing 4 slot bridge spacing motherboards for NVlink are scalped used or over 1k for new
>>12774 oh yeah meant 3 slot NVlink bridges are the most expensive or scalped new or used in my place, found one guy selling it used for less than what I saw on other sites gonna haggle him and see if I can get it cheaper
>>12775 like I managed to find 2 sli supported 3 slot bridge motherboards for less than 300 new or used, it's just a pain my plan to save more with the 4 slot bridge NVlink is pretty much dead, no wonder why SLI died kek
>>12775 Like I've said, you're most likely not even gonna be able to power four of them if you can even get 4 identical cards. Just get two and rent something for a few bucks/hour if needed. I get that you're super-excited at the proposition of 72/96gb of vram but I can guarantee you that you're gonna regret it if you manage to get it working. Upfront cost of all 4 cards + 2000W+ PSU + electricity bill are gonna be insane. Not to mention you're gonna need an open bench setup unless you want them to melt inside a case or plan to get watercooling kits for all of them. Seriously, if you need more than 2 just rent some A100s or something.
>>12777 no not 4 3090s it's just 2, you have to work a lot into looking for motherboards that are consumer grade which have 4 slot bridge spacing between the 16x PCI gen 4 slots just to use the 4 slot NVlink that NVIDIA provided.
>>12778 most motherboards that aren't a headache to buy which are SLI supported have 3 slot spacing between the 16x pci gen 4 slots
>>12778 >>12779 Ah my bad, somehow forgot about the cards' thickness
>>12780 yeah the EVGA Dark is the only am4 motherboard that I can find with 4 slot spacing that aren't over 1k meanwhile intel has some deals I can find with 4 slot spacing between the 16x slot but that would just be spending even more money kek
>>12781 Can't you just get the cheapest good option and stop caring about spacing? PCIE 4.0 x16 risers/extenders are a thing
>>12782 I know but I would rather have gotten the best deal without having to use risers desu. Oh well, if the guy that is selling the 3 slot spaced nvlink can be haggled I'll buy it, but for now my thought process is to wait and just upgrade to intel and get one of their boards with 4 slot spacing between the 16x gen 4 slots in the future. Just 24 gbs of VRAM with one 3090 has made my training a lot easier and I can still use two without NVLink, I'll just wait and see tbh.
>>12783 but that's if I can't get a deal with the 3 slot space nvlink bridge, just checking my options for now.
>>12783 >upgrade to intel dude, no
>>12785 I'm basically stuck with AM4 for SLI support unless Asus gets back to the anon that contacted them regarding SLI support on the x670e-e
>>12786 it's been me all along, hi 5950X is still a beast and I don't think it's gonna be worth it to spend up to 1000 bucks for nvlink support when most good AM5 boards are over 500 bucks to begin with (also apparently assrock and gigashit ones are really awful this time around, surprisingly enough anus ones seem to be good)
>>12787 Yeah I'm just going to hope the guy I am haggling with gives me a good deal on the 3 slot bridge to save more money on the AM4 motherboard I got a good deal on, but I'll see. Hopefully AM5 is keeping gamers too busy to look for AM4 boards but I doubt it, everyone wants to save as much as I do right now
>>12788 If you can't get a 3-4 slot bridge then just get a riser/extender, it's not that big of a deal On another note all this slot talk made me realize that I might need a riser/extender myself, I wanted to get an rx590 for hackintosh purposes but it might not fit with the 3090 installed
apparently the chinaman that made AnyV3 made a butthurt tanty on civitai, called out Any4 as a fake (which it is) and is now showing off a test for v5. copy pasta from /g/ >comment: https://civitai.com/models/66/anything-v3?commentId=72697&modal=commentThread >model: https://civitai.com/models/9409
>>12790 >click civitai link >fucking trigger warnings All zoomers and millennials should've been subject to at least 6 hours of liveleak a day
>>12791 At least I can filter out the fag shit if I bother to go back for a while. Their website runs like such ass for me, when I scroll more than a page it starts to lag like crazy. There are some decent concept style ones on there for all my bashing so I like to look but their shit site makes it hard to do.
I tried sharing the webui with gradio.live and it was buggy as fuck and kept breaking either needing a reload or a complete SD restart Has this happened to anyone else?
>>12790 lol civitai was blocked in china man really went around the great firewall for civitai of all places
>>12794 Man that e-fame must be serious business
>>12793 The remote links? Yea they are fucking garbage.
>>12796 Yeah those. Shit just breaks after 1-2 gens, sometimes crashing SD itself
>>12797 For me its been a matter of how far away the person I am sharing the link with is. If they are far away, even in the same city, it will break after a couple gens and then person needs to restart their browser. They also won't get a working preview window. If I am remoting at home, it works fine but there is a 10 second or so delay and it will not last the full 3 days or however long they say the link lasts for.
(1.05 MB 768x1152 catbox_hv11h4.png)

(1.10 MB 768x1152 catbox_apvvxi.png)

(1.49 MB 960x1152 catbox_d3xxnq.png)

(1.29 MB 1152x960 catbox_ix0jlb.png)

I added an updated version of my Shinobu LoRA to MEGA, it should be capable of doing teen and white dress kizu versions easily now. Readme has a prompt explanation.
>>12799 Any reason to still use the original one?
So it looks like the story that AnyV3 was just a block merge and not a finetune turned out to be true.
(1.73 MB 960x1248 catbox_ulu39n.png)

(1.34 MB 1152x960 catbox_vwbyna.png)

(1.20 MB 1152x960 catbox_u23xrb.png)

>>12800 It should be an all-around improvement so no, I just left it there because I'm lazy and haven't regenerated the preview images on the new version yet.
>>12802 great, thank you also hi fishine anon some other anon above made a s16xue lora but i can't get anything good out of it even after an hour of wrangling (skill issue) and it's really inflexible even when feeding it character loras, can you do your usual magic if i provide a dataset?
(1.69 MB 960x1152 catbox_1wvz0g.png)

>>12803 sure, toss me the data and I'll see what I can do
>>12804 alright, thanks off to gathering the dataset then
>sd/g/ and /h/dg are both simultaneously huge trashfires right now I fucking hate that place now, I'm certain an /ai/ board would not fix anything.
>>12804 >>12805 do you want me to edit out the signature btw?
Comparing the two possummachine loras, this new one seems to be more accurate, but it's adhering to the prompt less, making her eyes brown like her hair, instead of gray as intended. Even with cutoff enabled and set to a weight of 2, there's still color leak.
>>12807 if they're present in many images it would be good to at least remove easy ones like flat backgrounds, it's up to you how much you care about the signature randomly popping up
>Civitai complied with a DMCA take down order by Square Enix over Final Fantasy Loras fucking kek
>>12810 they were probably shit but i never saved any tifa ones and never checked for a fang lora FUCK
>>12809 Backgrounds are all gradients but I'll try my best
>>12810 Makes me realize, Glaze probably wouldn't work too well for screenshots of realtime 3D content
>>12813 Glaze is suppose to another variation of those anti-AI protections right?
>>12803 The y'shtola one is on mega so i couldn't care less
>>12815 kek I love this back and forth war so much
>>12803 >>12804 Yeah wasn't 100% happy with the s16xue one, I was able to quickly put the dataset together but my training settings definitely aren't optimal for this one in particular. Here's the dataset if anyone else wants to give it a shot. All the signatures are edited out and images resized so they aren't gigantic res. I would assume that higher dim is probably needed for this one https://files.catbox.moe/88ymcs.zip
>>12819 Actually I did also train a 32dim version to test gradient checkpointing at 1024px and it turned out better if you want to give it a try https://mega.nz/folder/2FlygZLC#ZsBLBv4Py3zLWHOejvF2EA/folder/rB1AyZTI
>>12819 I've gathered all the good pics already and I'm already removing the signatures It was a good attempt tho, don't worry
>>12816 >The y'shtola one what? never seen it
(4.36 MB 3360x1460 catbox_a910sa.png)

(4.29 MB 3360x1460 catbox_nxuf74.png)

>>12808 I'm not sure of the best way to fix the color bleed, I do see a similar effect like how it's turning Diona's hair purple in this example. I do have alternate versions of the lora in the extras folder but the style doesn't carry over as much
(1.32 MB 848x1152 catbox_kdvbrn.png)

(1.52 MB 848x1152 catbox_7etu4n.png)

(1.35 MB 848x1152 catbox_79x05u.png)

(1.38 MB 848x1152 catbox_j1cg0d.png)

>>12802 Nice prompt
(1.42 MB 840x1256 catbox_0kd7hd.png)

(1.40 MB 840x1256 catbox_hqgky9.png)

(1.47 MB 840x1256 catbox_dvez3x.png)

(1.42 MB 840x1256 catbox_19hdym.png)

Liking the Shinobu lora but I'm still not sold on multi concept loras. It's way too easy to get completely different outfits from even just the same seed and changing around the order of the prompt.
(1.76 MB 1152x1152 catbox_npqu5r.png)

(1.96 MB 1152x1152 catbox_oj8x8z.png)

(1.13 MB 768x1152 catbox_hxu76g.png)

>>12826 It's mainly a proof of concept "she has like 20 canon variants how many can I pack in one LoRA with acceptable quality". I might make higher quality individual versions of her two loli Kizu forms and student Shino since I like those three a lot.
(1.52 MB 1152x1152 catbox_tqeki8.png)

(1.70 MB 1152x1152 catbox_eyxl1g.png)

(1.55 MB 1152x1152 catbox_tsn5kb.png)

this is a good position
(1.51 MB 1024x1536 00032-752712311.png)

(1.63 MB 1024x1536 00043-2857269232.png)

(1.71 MB 1024x1536 00023-3893186167.png)

(1.80 MB 1024x1536 00065-881822639.png)

(1.00 MB 768x1152 00065-1645350497.png)

(896.91 KB 768x1152 00067-1645350499.png)

(1.01 MB 768x1152 00070-965102598.png)

(1.15 MB 768x1152 00074-965102602.png)

i am addicted to wide hips pls send help
(2.52 MB 3456x4224 68032.jpg)

zeldabros ww@
>>12831 nice. catbox?
(1.17 MB 840x1256 catbox_e8840s.png)

>>12830 Thanks, that's really pretty.
(1.37 MB 840x1256 catbox_bb8skg.png)

Classic
Do any of you have any artstyle Locons you really like? I might put some into based66
>>12837 I like the Musouduki lora and Yinpa ones because they generate more loli-like girls than base can generate but I wouldn't put them in a model. My opinion is that an artist style lora would remove flexibility from the model and would look bad if you want to add another artist style lora on top of it.
>>12837 i just made two and i think they came out okay tho the misawa hiroshi one could've probably done better if i had pruned "painting \(medium\), traditional media, watercolor \(medium\)" since it kinda defeats the point if you need to invoke it, the other one was a redo of the mimonel one i did as a lora that works more or less the same.
>>12838 I see that makes sense, I was seeing if I can add locons to the mix given there's that option now, I'll just use the models that I have planned for the recipe for now
i was supposed to be cleaning up the s16xue dataset but i accidentally started playing mystia's izakaya and chen is gonna break my kneecaps if i don't pay up
>>12833 no catbox, keep to myself mostly, here is my messy ass prompt: https://files.catbox.moe/69zgzw.txt started out as a post-gangbang scene but once i started getting girls worth sharing i edited it down since the dicks are always subtly fucked up. i gen local at 512x768 and hiresfix 1.5x with denoise 0.6 and 25 hires steps. 6800xt on linux
>>12810 lmfao, this is the funniest takedown I've seen Squeenix do since Oto Wakka.
>>12810 meh. I'm working on a SFW-focused lora for lalafells, none existed anyway, and i couldn't get a good lala out of any model i tried. doubt it'll have much interest here but i'll share it anyway when it's done. Still on the booru scraping phase.
(395.67 KB 1888x1416 catbox_tchhmk.jpg)

(183.50 KB 944x1416 catbox_8qcuw7.jpg)

(232.12 KB 944x1416 catbox_owtfqx.jpg)

some watercolors with breasts of borderline size for 4chin. it's a nice style. the anime lineart lora on civitai at low levels helps keep the coloring light.
>>12846 catbox? i'm gonna use this as one of the preview images lol
>>12847 >>12846 i'm also surprised you got it to do a choco waifu, considering the dataset, but i guess i never tested if it could do them. i'm assuming the weight is low?
>>12830 >>12846 Damn, those lips are great, I'll have to try this one out in a bit.
(2.18 MB 1024x1536 catbox_xd4qj5.png)

(2.11 MB 1024x1536 catbox_jyoirz.png)

(2.07 MB 1024x1536 catbox_jg4rvv.png)

(2.18 MB 1024x1536 catbox_5tacb3.png)

>>12847 Guess I'll drop a few more gens while I'm at it, time to try preggo: https://files.catbox.moe/kkz250.png
>>12850 Oh crap I forgot to remove 0.3 zankuro from the prompt while testing the LoRA.
(2.26 MB 1024x1536 00015-371671987.png)

(2.17 MB 1024x1536 00017-371671980.png)

>>12851 i just regenned em without it. thanks
>>12852 Very pretty but a tiny bit of Zankuro really is some kind of secret sauce.
So apparently RE4 remake uses more than 13gbs of VRAM when it's on full settings, holy kek I didn't know 12gbs users would be fucked over so quickly
(202.01 KB 490x389 NMS_PtPpVCqzmH.png)

>>12854 amateurs, nms has been using 8+ gbs of vram from BEFORE 8gb+ gpus were released (titans don't count)
(1.78 MB 1024x1816 catbox_m72yab.png)

>>12831 you have not yet begun to wide
>>12855 honestly I feel like ever since the vram of the current gen game consoles hit 13-16gbs, PC ports just don't care about optimizing for ultra settings on all the resolutions kek
>>12857 good LoDs alone could go a long way but nope
(2.26 MB 1024x1536 catbox_zzlujx.png)

(2.26 MB 1024x1536 catbox_w2fv29.png)

(2.20 MB 1024x1536 catbox_73h0pe.png)

(2.18 MB 1024x1536 catbox_4fqnpt.png)

K a few more gens, that gyaru randomly showed up again but with blue eyes this time. I swear the preggo phase will soon pass.
>>12859 >I swear the preggo phase will soon pass. no
(2.04 MB 1024x1536 catbox_g43s3s.png)

(1.55 MB 1024x1536 catbox_dzf06p.png)

>>12860 You are probably right...
do you guys use upscalers or denoisers for your training images? If so which ones?
>>12859 >I swear the preggo phase will soon pass. It doesn't. It's just too satisfying to add stomach bulge or pregnancy to most of your gens
>>12862 I haven't trained a LoRA which required a denoiser because I've been lucky with having high quality datasets. Your best bet if you have a good chunk of shitty data that could maybe work is to run it through 4x animesharp and then use the python denoiser that keeps getting posted around.
>>12862 I find it's good to run compressed images through the de-jpeg esrgan filter or upscaling and re-downscaling to get rid of artifacts.
>>12865 >de-jpeg esrgan filter I don't know what that is >>12864 >use the python denoiser you mean the one that nuked that glaze shit?
>>12837 TorinoAqua
>Caught one of those annoying rangebans on /h/ >Fiddle with phone a bit >Actually managed to evade it Huh, I didn't know I could do that. I'd love to continue where I left but I've learned that getting into autistic slapfights with mods is a waste of time and effort. No idea how some people can actually engage in that for years.
>>12866 I posted about it last thread >>11377 I also remembered that you can just load the jpeg remover model into the models/ESRGAN folder in webui and run it from the extras tab. Good if you just want to try it out and saves the steps of using a script or downloading a separate GUI. Though I would recommend either for batches
Talking about scripts and stuff, what happened to that one anon that was going to frankenstein a 4chanx-like script for this site? Was suddenly reminded of it
>>12869 thanks a lot anon, gonna test it out
>>12870 still around, had a con to go to, didn't have much time, ended up spending it working on the shun/shunny lora
>>12872 Oh that's cool, sounds like fun. No rush, I was just checking for signs of life and if you were still here/still interested.
>>12873 totally want to make it, I'd like to have a bunch of the features, especially the ping when you get a response and the ability to embed messages in a chain, and a proper auto refresh
(3.33 MB 1280x1920 catbox_acaziq.png)

(3.68 MB 1280x1920 catbox_7d0jvu.png)

(3.88 MB 1280x1920 catbox_hnmpg4.png)

(3.40 MB 1280x1920 catbox_zklr0q.png)

snow bunnies
>>12875 I like the lighting on that third one.
i was supposed to tag the dataset but a friend roped me into trying ror2 for the first time and holy shit, videogames
>>12877 Videogames ae fun
Locon artstyle keeps making hands blobs of noise. Has anyone tried training on an artist that actually knows good anatomy and seen the same behaviour?
>>12878 >>12877 Only videogames I play these days are gacha and finding good discussion of that anywhere is the hardest trial ever known to man
>>12875 >Baka-DiffusionV4[A]Vae Is this model unreleased?
>>12879 bad hands can have a number of causes >bad artist >bad lora >lora/model clash >lora/lora clash >bad prompt good hands only emerge when everything is in harmony, or when you use an artstyle with thick lines. that said, try this, or the opposite (cfg 15, mimic cfg 5, half cosine up) https://github.com/mcmonkeyprojects/sd-dynamic-thresholding
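For the curious, the core trick of the linked extension is roughly "clamp the high-CFG latent back into the value range a lower mimic CFG would produce". A minimal numpy sketch of that clamp-and-rescale idea — not the extension's actual code, which also adds the scheduling modes like half cosine mentioned above:

```python
import numpy as np

def dynamic_threshold(latent_hi, latent_lo, percentile=99.5):
    # Measure the dynamic range of each latent at the given percentile,
    # clip the high-CFG latent to its own range, then squeeze it into
    # the mimic-CFG range. Rough sketch of the dynamic thresholding idea.
    s_hi = max(np.percentile(np.abs(latent_hi), percentile), 1e-8)
    s_lo = np.percentile(np.abs(latent_lo), percentile)
    clipped = np.clip(latent_hi, -s_hi, s_hi)
    return clipped * (s_lo / s_hi)

# Stand-in latents: a "cfg 15"-like wide distribution and a "mimic cfg 5"
# narrow one (random data, purely illustrative).
rng = np.random.default_rng(0)
hi = rng.normal(0, 3.0, 4096)
lo = rng.normal(0, 1.0, 4096)
out = dynamic_threshold(hi, lo)
```

The result keeps the high-CFG prompt adherence while its values stay inside the range the low CFG would have produced, which is what avoids the deep-fried look.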
>>12881 Its unreleased as of now but I gave out my model to some of my friends I'll release it soon I hope.
>>12882 I'm gonna bet it's the very low amount of images paired with a unique artstyle, regardless I'm gonna try what you said
>>12883 i don't know what the [A]Vae in the name means but if it means you have the vae baked in, please don't release it with it baked in
>>12885 why? If you use an external VAE it wouldn't matter anyways right?
(1.33 MB 840x1256 catbox_u93gmd.png)

(1.24 MB 1256x840 catbox_nxxut4.png)

Interesting, this is with only 1 epoch (about 30 repeats per image total) for the s16xue lora. Maybe I'll try linear rather than cosine just to see if more repeats/epochs out of a limited dataset will improve anything
>>12885 It won't matter if u use an external VAE. The internal VAE is baked in because I don't want retards asking me why their colors are washed out. [A] means the A variant, I have versions from [A] to [G]
(9.64 MB 6840x1214 xyz_grid-0043-2335894453.png)

(9.65 MB 6840x1214 xyz_grid-0045-2335894453.png)

(9.86 MB 6840x1214 xyz_grid-0046-2335894453.png)

>>12882 Half cosine down, half cosine up and no dynamic thresholding
>>12889 They uploaded in the wrong order but 0043 is half cosine down, 0045 is no dynamic thresholding and 0046 is half cosine up; x-y means x=nothing for the locon and x=2 for the lora. I feel like the locon captured the general composition of the artist a bit better but I think the problem was the really fucking small dataset and there's a bit of variance between the pics + hands being all over the place. Artist in question: https://twitter.com/gar_rooster?lang=en
>>12890 i only got the images until the post of the 13th of january that has mako in it but yeah even on those there is a bit of variance on the artstyle
(496.75 KB 2048x2026 xyz_grid-0004-955092337.jpg)

Implemented a technique shared here that supposedly can help bring out better quality in high frequency details (backgrounds, leaves, bokeh). https://twitter.com/Birchlabs/status/1632539251890966529 I've had mixed results with it and it tends to shift the hue of gens (which is consistent with the examples shared on Twitter too), but maybe you anons will have some better luck with it. https://gist.github.com/catboxanon/eb755be5af637726461d1c7d67b7d89e
>>12892 Interesting
>>12892 I didn't understand anything, but those trees on the right pic sure look nice.
(2.17 MB 1024x1440 catbox_wlay8s.png)

(2.26 MB 1024x1440 catbox_xn44w5.png)

(2.11 MB 1024x1440 catbox_obebwz.png)

(2.02 MB 1024x1440 catbox_0h5l83.png)

WIP model. I've block merged so much that I now can't go above CFG 7 without it looking like CFG 20 on NAI.
>>12892 Anons, what are your usual CFG scale values?
What's the cookie cutter prompt to test your model?
>>12897 Masterpiece, best quality,asanagi, portrait, (solo), brown skin:1.3), day time,woman, small breasts, long hair ,(light green hair) drill hair, , masterpiece, best quality, clothes, forest, leaning to the side, bloom, atmospheric lighting, Negative prompt: (worst quality, low quality:1.2), lowres, loli, child, nsfw Steps: 20, Sampler: Euler a, CFG scale: 12, Seed: 2506722060, Size: 768x768, Model hash: f3aa4497f8, Model: Baka-DiffusionV4(FP16)Experimental[B]VAE Baked[HipolyMBW], Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires steps: 30, Hires upscaler: Latent This is the default go to prompt to test various models.
The prompt itself isn't quite right, with missing brackets, but I just kept it that way after getting that good seed.
>>12896 i'm usually using 7, probably can go lower though
>>12898 Ah I see, how about lora?
If a model can't go above CFG 10, would you still use it? As in, above CFG 10 it would look quite deep fried.
>>12902 I don't usually use LoRAs to test with my model but If I have to pick one it would be my own LoCons : https://civitai.com/models/23720/fajobore-or-stylizedlocon The preview images are made with Baka-DiffusionV4
If enough people use CFG scales below 10 or just above 11, I could release the model. If the norm is 12 or above, then I'll have to work on fixing the CFG scale via MBW, which I really don't want to do.
So is the new gradio still fucked? I was in the process of merging the changes into my local version controlled environment, but if it's still buggy as fuck then maybe I'll just wait.
>>12837 Would rather not have specific artists in the mix honestly but you do you anon
>>12905 i haven't heard of anyone using 12+ regularly in months tbh
>>12905 I use 5-7 most of the time personally
>>12905 The norm is usually below 10, but I don't know how your model interacts with dynamic CFG scale thresholding. Some anons will be using it sometimes.
>>12905 I don't go higher than 11~12.
>>12910 Thanks for the feedback about the CFG scales. I think I'm comfortable releasing the model in its current state.
>>12913 whats with the name though
Easier to keep up because I'm rapidly replying. It's off now though.
>>12915 site does support trips if you want to have one but I wouldn't recommend keeping it on since it always seems to apply a big debuff on the poster
>>12916 I see. I'll keep that in mind
Feeling inspired to prompt more nuns tonight
>>12918 For a moment I thought this is generated
>>12919 I was just very bored and looking for inspiration from real artists
(6.79 MB 3072x1536 catbox_xey5va.png)

Fixed the CFG scale to an extent. Both models are using CFG 12. The model on the left was the one I was having massive issues with earlier. Going above 12 will still cause problems, but I think I can stop merging now and look for prompt wizards for preview images.
How do I change generic tags into more specific tags? Let's say I want a specific school uniform, but the automatic tagger only put the generic school uniform tag, and it didn't get integrated into the lora. Do I just do it manually?
>>12922 Yeah use dataset tag editor or whatever you prefer to mass replace and substitute tags.
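If you'd rather not use an extension, a mass substitute over kohya-style .txt caption files is only a few lines. A sketch, assuming comma-separated tags and one caption file per image (the folder path and tag names here are just examples):

```python
from pathlib import Path

def replace_tag(folder, old_tag, new_tag):
    """Substitute one tag for another in every .txt caption file in a folder."""
    for txt in Path(folder).glob("*.txt"):
        tags = [t.strip() for t in txt.read_text(encoding="utf-8").split(",")]
        tags = [new_tag if t == old_tag else t for t in tags]
        txt.write_text(", ".join(tags), encoding="utf-8")

# e.g. replace_tag("dataset/5_mychar", "school uniform", "serafuku")
```

Matching on the whole tag (rather than a substring replace) avoids accidentally mangling tags like "school uniform day".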
man fuck colab, this shit breaks literally every day now
>Spent like an hour troubleshooting colab and getting it to run again
>It runs again
>It looks ugly now and it's not even on torch 2, so god knows what they broke
brehs.....
>>12921 nice RTX Off/On image Looks pretty good
(1.99 MB 1280x1536 1.png)

(1.96 MB 1280x1536 2.png)

>mfw i spent 4 hours inpainting these and they still have a bunch of issues is inpainting worth it bros?
What was the fix for random black images? I forgot
>>12928 --no-half-vae
>>12928 --no-half-vae
(1.63 MB 1024x1536 catbox_qmm079.png)

(1.96 MB 1024x1536 catbox_ojxp6w.png)

(2.05 MB 1024x1536 catbox_q19d44.png)

(1.89 MB 1024x1536 catbox_jik81v.png)

>>12863
>It's just too satisfying to add stomach bulge or pregnancy to most of your gens
Yes, but I do wonder why I'm "suddenly" into this though, must be an age thing.
For some reason the name keeps popping back up as usual... what do I do to disable auto naming? >>12932
>>12933 Try deleting cookies.
>>12933 well at least we know that if someone came here trying to schizo post with a name, they would get self-exposed kek
>convnextv2-v2 uh.... why not call it "v3"?
(961.93 KB 944x1064 38532-843335608.png)

I'm finding this lora hard to use tbh
whats a good temp for your GPU while training?
If catbox script anon is here, it seems right-clicking on PNGs with no metadata show this error
Is there a recommended colab notebook for locon/licorice/whatever training? I want to experiment with porting style LoRAs to LoCons but I don't have a ton of resources for training locally.
>>12939 It's because there's no metadata in those images, but the script detects it for some reason. It's from this post iirc >>12921
>>12941 Yeah probably better to just show "no metadata found" or something
>>12937 It can be a bit rough to use but it can give you some good stuff from time to time.
(2.09 MB 1024x1536 catbox_5yhju3.png)

(2.14 MB 1024x1536 catbox_md5m8n.png)

(2.21 MB 1024x1536 catbox_b2vi50.png)

(2.12 MB 1024x1536 catbox_2p710a.png)

This style LoRA mix is making me a bit worried, ponsuke_(pon00000) at 0.8 and ushiyama-ame-1272it-novae at 0.3
>>12887 shit, sorry, was supposed to keep cleaning the dataset but i got distracted by 2hu and ror2
Is anyone looking at 4chin hdg? Apparently there are 2 anons testing the upper resolution limit of NAI before it starts producing terrible results, and I think this might be helpful for LORA training given we always train with final-pruned.
>>12946 Technically this would be unbound if we had access to NAI's new samplers
>>12946 I rarely look at 4chan these days because I always have to live with the fear of "Will this get me randomly ISP banned by a mod or not" but that sounds useful
>>12942 Anons.. If anyone happens to have one of these, pls let me know. <lora:artist-Ebisujima Misato_ tan mesugaki:0.8> <lora:artist-zankuro-new:0.5> <lora:artist-aki99 v4:0.5>
>>12951 Thank you!
(1.27 MB 4000x2645 catbox_zbathx.jpg)

>>12949 Thank you, though I found out about it on my own a bit ago. Anyways:
>128/128 512x adamw8bit
>128/128 768x adamw8bit
>32/16 net 16/1 con 768x dadaptation lycoris
LoCon seems... less good than the LoRA, though it could just be me using the wrong hyperparameters. I'm going to cook up a LoHA on the same dataset in a bit since apparently it plays better with styles?
EasyNegative seems to be doing something fucky. I picked it up because someone here was using it, but the thing has the tendency to randomly break some loras/embeds or pictures in general for some reason.
(1.29 MB 1024x1536 catbox_stwsz8.png)

(1.31 MB 1024x1536 catbox_p3zetv.png)

(1.30 MB 1024x1536 catbox_3qdsdu.png)

(1.43 MB 1024x1536 catbox_mxs3bv.png)

working on a tenroy locon, it's kinda so-so, got the general colors correct, and gura, obviously, but I think it needs more time in the oven. >>12954 I don't think i've had that issue with EasyNegative, pretty much all of my gens use it.
>>12954 Negative embeddings/LoRAs are a meme, you're more or less just adding more noise to your generations and they're going to push your gens towards a specific style just like any other network addition.
>>12955 I've had it with a couple loras and some other anon also had the issue but I forget in which thread was it. It tends to work well but it'll decide it hates certain things and break a couple loras/tags.
(1.39 MB 1024x1536 catbox_6gnhye.png)

(1.42 MB 1024x1536 catbox_cza6i3.png)

(1.38 MB 1024x1536 catbox_a67txa.png)

(1.42 MB 1024x1536 catbox_618ysy.png)

>>12955 gura sex as well, just to prove it can be done (despite tenroy having exactly 0 in it) >>12957 unfortunate
>>12948 Funny that you mention this, my mobile data finally got unranged banned from 4chan after 5 years. My mobile hotspot that uses the same carrier but different plan was ranged banned on arrival but can now post on 4chan as well.
>>12954 I stopped using negative embeds when they started fucking with my LoRAs.
>>12954 negative embeds are a double edged sword, on one hand they provide easy one word prompt for general undesired content but on the other hand there's no real way to edit whats in the embed itself so if you run into conflicts with the LoRA or model you decide to use its gonna be a problem.
>>12948 >>12959 Yeah site's fucked. Like, I enjoy the anonymity but rules are not enforced there AT ALL. Either some mod will come like a storm, ban someone at random and leave the aftermath to fix itself or whoever was shitposting and didn't get banned to gloat, ban a couple cherry picked posts and leave the shitstorm to continue at large or simply don't enforce anything and delete whatever they personally don't like without giving a hoot whether it's against the rules or not. And then there's the autists that dedicate their entire lives to just ruining discussion about x or forcing y and there's nothing nobody can do because telling them off just gets you banned while if they get banned then they'll just evade.
(1.68 MB 1024x1536 satou-kuuki-1412it-novae.png)

(1.77 MB 1024x1536 00004-318623155.png)

(2.05 MB 1024x1536 00001-4108342238.png)

(1.82 MB 1024x1536 00033-611278468.png)

>>12944 https://civitai.com/models/9652/lactation Should try the lactation lora too. I love the combo of stomach bulge+lactation on cunny.
>>12962 4chan has been a counter-counter culture honeypot for well over a decade, the tranny jannies probably do it for free but the behaviors you described are typical JIDF tactics, they pay quite well and they don't even have a reason to hide any of it, see /v/'s shilling and forced derailments.
turns out the penis on face lora also does penis over eyes well
>>12966 Well it is listed as one of the tags in the description. There's a lot it can do I don't think most are aware of.
My UI gets very fucky when I turn on Show extra networks, does anyone know how to fix it? I have lots of loras/folders.
>>12966 oy vey *rubs hands* good job goy
>>12965 It's not paid, it's just autism. Check any /vg/ general for example, if it's a game with people posting on it then it most likely has someone forcing that whatever they like is good and whatever they don't like is bad and then they turn the thread into their personal circlejerk or safepace for whatever they wanted it to be. Doesn't even have to be /vg/, literally happened in /h/ too.
(2.14 MB 1024x1536 catbox_dpcqtu.png)

(2.21 MB 1024x1536 catbox_neewhc.png)

>>12964 That does sound hot, I'll give it a try later.
(197.52 KB 1391x933 FisZDN5WAAAYF1H.jpg)

>>12970 It's absolutely paid my dude. /v/ is just one of the more egregious examples. JIDF pays per click, post, view, any sort of engagement. Discussions are constantly controlled and derailed.
Whats the prompt/tag if I want to gen more filled out girls rather than skinny?
>>12973 something like curvy, plump
(1.15 MB 840x1256 catbox_56xkdf.png)

(1.28 MB 1256x840 catbox_gb6osg.png)

(1.48 MB 1256x840 catbox_n0uucs.png)

(1.49 MB 840x1256 catbox_onkh7b.png)

WIP
>>12973 thicc (yes this word works) wide hips,thick thighs,large breasts,curvy
(576.63 KB 512x768 catbox_2e8qiv.png)

(586.55 KB 512x768 catbox_0qd1wh.png)

(609.95 KB 512x768 catbox_i5yc8q.png)

(609.95 KB 512x768 catbox_i5yc8q.png)

steampunk/mecha bunnies
(1.02 MB 4000x2645 catbox_zapdl0.jpg)

(1003.44 KB 3648x2411 catbox_jssfmn.jpg)

I'm uploading my experiments if anyone would like to play with them: https://mega.nz/folder/OoYWzR6L#psN69wnC2ljJ9OQS2FDHoQ/folder/r4pBmbIb So far I can't really see any improvement in quality in the LoCon (lycoris) or LoHA version vs the original LoRA. First image is with Latent upscale, second is no hires fix.
>>12945 i really need to stop getting distracted holy shit
>>12937 >>12943 If you go above cfg 5 with this thing it just goes weird, also it really likes aom3 from my limited testing but it does great things.
I think I might use AOM2 hard as the base for based66 desu
>>12980 >If you go above cfg 5 with this thing it just goes weird Hmm maybe that's why my gens are wonky with that LoRA, my cfg being too high.
>>12981 There's anything 5 now too
>>12981 I quite like AOM3A1B/A3 tbh, I think A1B might be suitable for mixing
>>12984 I only ever use AOM3A1 personally
Why hasn't anyone merged NAI-full and NAI-curated? Wouldn't it be so much better as a training + general model?
>>12981 not a fan of gape which is the only reason to use hard over nsfw but it's your model and you haven't let us down yet
>>12985 not too willing to use it for training, I personally don't think Lykon's stuff is that good, most of the time being overbaked really hard. `I usually find good results at 0.65 weigth that I later offset to 1.` I don't exactly know what that means, but it sounds like he bakes his shit to oblivion
>>12989 actually, to be entirely honest, if I was told I had to train on a different model, I'd just train on based64 or 65 because at that point I can't be bothered to care about using my LoRA I make on a different model.
>>12985 > training on a merge kys
>>12991 Alright, which one of you let a 4fag in?
>>12985 He really should add an explanation of the benefits of training with his model, otherwise it just seems like schizo shit.
>>12985 Why the fuck would you do this, don't niggers know that a lot of shit gets fried at 1.0 when you train outside of NAI? Fucking faggot I just want to write a comment calling out his shit and he's going to be the reason why we're going to get more shit LORAs
>>12994 leave it dude, arguing with retards always ends up being a waste of time
any anons around know where i can find old versions of danbooru archived? i remember this being mentioned a few months ago and i'm looking for an artist that isn't on there anymore
>>12995 Fine, but I'm going to test his shit out anyways and compare
>>12997 looks like shit from what I've seen, got this from the 2hu discord
>>12998 what is this image even comparing
>>12999 did you even read the last 10 posts jesus christ
>>13000 it doesn't say which image is what model or what the differences between the LoRAs' training is
>>12998 Can we get that faggot on civitai banned for misinformation leading to shitty LORAs?
Been using latent upscaling since day one and I'm now giving some other upscalers a try for Hires fix and I'm currently using 4x foolhardy Remacri but I wish I could get the gens a bit sharper, anyone got some other upscaler recommendations for me to try?
>>13003 Animesharp I guess? I usually just inpaint eyes if they are too blurry
>>12980 It seems to fry a lot of loras
>>13002 that shit sucks but like, its still better than 98% of all civitai garbage lmao >>13001 the shit one is the anylora one
>>13004 I'll give it a try but I guess I'm so used to the sharpness of latent.
>>13002 Ooor we could just NOT fucking care in the slightest? I don't get you people sometimes. First you (general you before start niggering out) trash civitai day and night (rightly so) and blame it for all the shit garbage loras that come out of it, then you blame it for the influx of braindead normies who flood 4 chan /hdg/ every day and treat it more like /g/ /sdg/, then you decide "oh actually 4chins /hdg/ is a dumpster fire, fuck that" (fair, I'm with you on this) and now you're like "UGH CAN WE GET THAT GUY BANNED HE'S SPREADING MISINFORMATION" as if that day old garbage is the reason why normies have ruined /hdg/ and flooded the internet with shit loras. Either cheer and let it burn or go back to /hdg/ where you can act like a faggot all day long
>>13003 If you have Photoshop you can use this and avoid trying out 3458 upscalers, it's a 2 seconds fix https://anonfiles.com/gah6y5hfz6/Blur_Sharpen_atn
>>13007 honestly, I haven't really ever used latent, I've only had bad luck with it. I pretty much only use remacri at this point
>>13008 im not reading this, take your meds I can smell the schizo without even having to read it
>>13011 Go back to 4chan where you can act like a faggot and whine about civitai all day
>>13009 Nah I have GIMP but I can look for something similar for it. >>13010 Latent has served me well but it can be a lot of gacha and R-ESRGAN 4x+ Anime6B seems alright.
>>13008 mucho texto
>>13011 >>13014 All that anger aside, it's not exactly wrong though; I can see where anon's coming from. If it's not that guy then it'll be someone else, so there's no point in caring because nothing's really going to happen. There are idiots on the internet everywhere, so why waste precious energy or time getting angry at every single one of them when you could spend it on yourself, people you like, or a community you like? Instead of worrying about what others are doing, improve yourself or try to make things even a little bit better for the people or communities you like.
>>13015 >improve yourself never knew an anon on 8chan of all places aside from a Vtuber would say that too kek
>>13016 People on 4chan spend too much time flinging shit and I've flung my share of shit too but this isn't 4chan and I imagine people can at least attempt to listen to each others points of views and talk it out. Personally, I do agree it's pretty stupid to worry about what some idiot on civitai, youtube or twitter is doing because there isn't any shortage of idiots and whacking one does nothing because ten will sprout after them. It's better to just point out that it's wrong so others don't fall for it and work on stuff that's actually worth it.
Did anyone check the option Kohya added? --min_snr_gamma https://github.com/kohya-ss/sd-scripts/pull/308
>>13015 I'm angry because these niggers shifted from "UHM NO DRAMA PLS NO TALKING ABOUT 4CHAN HDG WHAT HAPPENS THERE STAYS THERE COMFY VIBES ONLY" to complaining about hdg whenever it's schizo/jewmerican hours and now we're arguing about "muh civitai misinformation" with niggers who obviously just came here from hdg to stir shit up - NONE of the regular 8chan hdg posters care about civitai in the slightest aside from trying out the newest nahida loras just in case they finally got the eyes right I hope they're just here to stir shit up 'cause otherwise they need meds for their bipolar disorder
>>13018 >tranny example grid
>>12918 but I guess easyscripts didn't add this yet since it came out just today:
>Fix issues when --persistent_data_loader_workers is specified.
>The batch members of the bucket are not shuffled.
>--caption_dropout_every_n_epochs does not work.
>These issues occurred because the epoch transition was not recognized correctly. Thanks to u-haru for reporting the issue.
>Fix an issue that images are loaded twice in Windows environment.
>Add tag warmup. Details are in #322. Thanks to u-haru!
>Add token_warmup_min and token_warmup_step to dataset settings.
>Gradually increase the number of tokens from token_warmup_min to token_warmup_step.
>For example, if token_warmup_min is 3 and token_warmup_step is 10, the first step will use the first 3 tokens, and the 10th step will use all tokens.
Token warmup seems weird tbh, and I still don't get what that --min_snr_gamma is supposed to do.
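For what it's worth, the PR implements Min-SNR loss weighting: the loss at each timestep gets multiplied by min(SNR, gamma)/SNR, so the near-noiseless timesteps (huge SNR) stop dominating training while noisy timesteps keep full weight. A sketch of just the weight, assuming you already have each timestep's SNR:

```python
def min_snr_weight(snr: float, gamma: float = 5.0) -> float:
    """Min-SNR loss weight: high-SNR (low-noise) timesteps get scaled
    down to gamma/snr; noisy (low-SNR) timesteps keep a weight of 1."""
    return min(snr, gamma) / snr
```

So with the default gamma of 5, a timestep with SNR 100 only contributes 5% of its unweighted loss, which is the whole point of the flag.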
(1.03 MB 4000x2674 xyz_grid-0020-3921083037.jpg)

(1.27 MB 3840x3252 xyz_grid-0024-3517002062.jpg)

(1.11 MB 4000x2674 xyz_grid-0021-3921083037.jpg)

(1.14 MB 3840x3252 xyz_grid-0023-3517002062.jpg)

>>13021 fun times, I'll get on the update a bit later, currently baking a tenroy lora, I think bake 2 is better than bake 1 but only marginally so, will probably release somewhat soon, I'm happy with it I think, the colors really pop, making it a goto for me now, because I personally really like colorful things
halfway done with the s16xue dataset, the hardest thing about editing all these pics has been my dick
>>13022 the difference between the tranny face example grid between the models trained with and without --min_snr_gamma 3 or 5 only have very slight differences desu but if Kohya committed to it might as well add it, I just hope that mixing two LORAs trained with it doesn't do wonky shit like noise offset did
>>13024 yeah, I mean, if it provides any better results then I think it's worth it, assuming it doesn't break things of course.
That, and separating multi-char pics, cleaning up the hearts and text, and waiting 30 to 45s per pic with SuperPNG. I can't fucking wait to quadruple my cores and octuple my threads.
>>13013 No idea if GIMP has the same blending modes, but:
1: make two copies of your layer (so 3 in total)
2: invert the topmost layer and set the blend mode to vivid light
3: apply gaussian blur to said layer (idk if GIMP lets you change this later, PS does with smart filters); in PS a strength of 0.3 is enough with 2D stuff, you'll get halos on higher strengths but feel free to try it
4: group the top 2 layers and change the group's blend mode to overlay, messing with the group opacity if needed
Or just get some ancient version of PS like CS3 portable so you can automate this.
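That layer stack is essentially a high-pass sharpen; outside Photoshop or GIMP, the same idea is an unsharp mask: add back a fraction of (image minus its blur). A toy single-channel sketch (the blurred values are assumed precomputed, e.g. by a gaussian filter):

```python
def unsharp(pixels, blurred, amount=0.5):
    """Unsharp mask: boost each pixel by a fraction of its high-frequency
    component (pixel minus blurred pixel), clamped to the 0..255 range."""
    return [max(0, min(255, round(p + amount * (p - b))))
            for p, b in zip(pixels, blurred)]
```

Higher `amount` values are the equivalent of stacking the blur strength; push it too far and you get the same halos as in PS.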
>want to retrain LORAs >my old datasets are cropped fuck my life
(1.69 MB 1024x1536 catbox_0dyr6l.png)

(1.85 MB 1024x1536 catbox_10asck.png)

(1.93 MB 1024x1536 catbox_akuh14.png)

(1.86 MB 1024x1536 catbox_7b55fm.png)

>>13026 Thanks, I will give it a try but these gens look pretty good with R-ESRGAN 4x+ Anime6B.
>>13028 np also i gotta tell you, those nips look kinda disgusting lol, what style is that?
>>13030 Rude! <lora:ponsuke_(pon00000):0.8> <lora:ushiyama-ame-1272it-novae:0.3>
>>13031 they look alright in the thumbnail but they kinda fall apart up close
(4.94 MB 3360x1459 catbox_qe0e2l.png)

(5.03 MB 3360x1459 catbox_2s80np.png)

>>13023 I hope you have better luck with training. I'm still only able to get anything consistent with unet-only or low epochs
>>13033 not training it myself, not right now at least i'm cleaning up the dataset for the fishine/spacezin anon, his style loras are fucking magical if he can't do it then the only one left is the guy who trained the original cutesexyrobutts lora
(1.27 MB 840x1256 catbox_601sd2.png)

(1.46 MB 840x1256 catbox_dr03de.png)

(1.47 MB 840x1256 catbox_bnet75.png)

(681.01 KB 768x512 catbox_yup76l.png)

>>13034 Ah I see. The training results I've gotten from this lora have inspired me to prompt for nightmare fuel and body horror using other artist loras which has been pretty interesting.
>>13035 WHY DO I CLICK SPOILERS BEFORE READING THE POST WHY DO I CLICK SPOILERS BEFORE READING THE POST WHY DO I CLICK SPOILERS BEFORE READING THE POST WHY DO I CLICK SPOILERS BEFORE READING THE POST
>>13003 remacri, 4x ultrasharp, and lollypop are the only three i use other than latents
>>13036 you would've clicked it anyways.
(850.09 KB 960x640 catbox_fj3sof.png)

(984.30 KB 640x960 catbox_39apd9.png)

(1.02 MB 640x960 catbox_xvr9ro.png)

(1.31 MB 840x1256 catbox_h8ctil.png)

>>13036 DO NOT CLICK IMAGE 4 EXTRA SCARY
>>13039 not gonna click any of them, thanks
>>13034 which fishine are you talking about? there's three in the repo. >>13035 i would sincerely enjoy that lora.
>>13039 >>13035 My curiosity is getting the better of me.
(1.48 MB 1024x1536 catbox_qbb1hj.png)

>>13039 Hot my dude first 3 were okay but not sure what went wrong with number 4
>>13041 Which one? If you need a specific one it should already be on my Mega >>12820
>>13041 this fishine LoRA: https://mega.nz/folder/OoYWzR6L#psN69wnC2ljJ9OQS2FDHoQ/folder/f94VWZJa the other two are one my friend made really sloppily and one from an anon's whose dataset I merged with mine and cleaned to make this one
>>13045 making progress on the s16xue dataset, i have another lappy+texas set and the lappy+texas+bagpipe one and then i'm done gonna leave the grani set in even though the style is a bit different, i wouldn't include it in the training data tbh i'm cutting up the multi-char ones just to inflate the pic count, not sure if having only one character per pic is beneficial when it comes to styles
>>13046 lol you made this sound like an arduous task and i check the dataset it's literally 70 images. come on man
>>13047 edit shit on 4 hours of sleep and on a cpu that takes 1 minute to save images then get back to me
>>13048 i've edited over 1k image datasets, and that includes more than just basic ass simple backgrounds with 2 second spot healing watermarks
>>13049 good on you
Did this anon seriously try to play the fucking one up game?
>>13051 hope it's not the fishine anon, guy's too good to be this petty
Why can't we be friends? Stop hurting each other
>>13049 >>13051 >petty dude you're blogposting about editing 70 images kek. and if you think it's difficult to imagine editing datasets with hundreds of images, i don't know what to tell you
>>13054 i've edited a 340 images dataset and you don't see me bragging about it, if anything i'm doubting your claims since anyone BUT a petty bitch would be like "yeah whatever it's tedious and boring you'll get through it"
(1.17 MB 768x1152 catbox_kre4j8.png)

(1.16 MB 768x1152 catbox_3vewsa.png)

(1.17 MB 768x1152 catbox_m6odki.png)

(1.15 MB 768x1152 catbox_peh626.png)

horny bunnies Using some work in progress wildcards I help making
Man I wish I only had to painstakingly edit 70 or 340 images…
>>13055 i'm not bragging about it. the anon was the one who started going "dude you've never done it before, get back to me when you have". besides that, he's a shitty attention whore tourist who's blogposting so i'm not going to be nice regardless >>13057 right?
>>13058 we kind of need 'tourists' around here though.... This shouldn't be a circlejerk for a select few or a couple elitist few, people should be able to post whatever.
>>13056 Nice bunnies.
>>13059 >we need tourists Fair >should not be a circle jerk of elite few Never really saw us that way anyway >people should be able to post whatever not this petty shit
>>13058 lmao i said "do it when you're sleep-deprived and on a shit pc that takes a minute to save each image" but sure >besides that, he's a shitty attention whore tourist who's blogposting so i'm not going to be nice regardless i've been here since the third thread >>13061 >not this petty shit pot calling the kettle a nigger
>>13061 its an anonymous imageboard, the point of being anonymous is so we can say dumb shit or vent under the guise of anonymity Not to act like someone committed a cardinal sin and argue about it whenever someone says something dumb
>>13050 yeah anon, you're the only person in the world that has done stuff sleep deprived or with a shit pc. also you don't have a shit pc if you're the birdfag, and if it takes you more than a minute to save an image, you have bigger problems lol
>>13062 >pot calling the kettle a nigger alright anon, I won’t step in to diffuse you and the other anon’s bullshit next time
thread feels really hostile today you guys need to chill out some
>>13064 >yeah anon, you're the only person in the world that has done stuff sleep deprived or with a shit pc do you want a trophy for editing 1k images? are we supposed to give you money or something? >also you don't have a shit pc if you're the birdfag my ram isn't here, can't do shit about it and i'm stuck with the 11 years old i5 until then
>doing tag processing
>see what Booru considers as "blurry"
>broad range of blurry background, foreground, depth of field, soft focus, etc
>no actual full-on blurred images because booru policy is to not upload bad images like that
Someone at NAI is a retard.
(1.49 MB 1024x1536 catbox_b3hjti.png)

(1.44 MB 1024x1536 catbox_qxpb69.png)

(1.66 MB 1024x1536 catbox_1ut3zh.png)

(1.75 MB 1024x1536 catbox_2e5j2a.png)

I finally stopped genning gawr gura hentai for long enough to actually gen preview images for the mega and civitai for Tenroy, so it's up and running now. A few notes: small dataset, a large amount of gura, so it will be a bit overfit. No sex in the dataset, just nudes (primarily gura), so it might be a tad tough. Tenroy is stupidly inconsistent, so it doesn't exactly look correct, but the colors are mostly there, so I'm happy with it. I guess you can say this is more or less "Tenroy inspired" because it basically only gets the colors correct for the most part. Of course I had to include at least one gen of Hifumi here (though the hair isn't exactly correct, unfortunately). Anyways, links: https://mega.nz/folder/CEwWDADZ#qzQPU8zj7Bf_j3sp_UeiqQ/folder/qUJA3T7T civitai: https://civitai.com/models/25422
>>13067 >do you want a trophy for editing 1k images? are we supposed to give you money or something? no and i never asked for one. literally bullshit lol. i was just saying you're blogpost is dramatic and stupid, and you got defensive and pretended like no one has done that shit before >>13066 i'm done arguing with the retard after this post anyways
>>13070 >no and i never asked for one. literally bullshit lol. sure sounds like you did >got defensive and pretended like no one has done that shit before how petty do you have to be to get this upset at a throwaway comment? just because i didn't word it differently like "yeah anon i'm tired?" fucking christ dude
>>13070 >i was just saying you're blogpost is dramatic and stupid you kinda are too for getting this offended over anon 'blogposting' or complaining about editing images shitposting takes two people and it's usually both sides equally who are at fault. This definitely never deserved to get this big since it's a pretty dumb thing to argue about.
>>13069 is that a miyu lora that looks like miyako
>>13073 no, the dataset just have about 7 images of miyu in it, so I can basically prompt it wholesale.
>>12927 do what makes you happy bro
>>12946 Well, nobody is pushing past 1MP training anytime soon given the exponential cost increases. Screencap this when I'm wrong about the exponential pace of AI again.
>>12980 Another AOM3 penis-on-face report, glad I wasn't going insane. AOM3 seems quite hit or miss; I remember early on it seemed to improve some of my LoRAs while ruining others. Also, catbox on those? It was too late to reply on /h/.
>>13003 Nothing will be as sharp as latent, just the nature of the beast. Sometimes I go back to latent and am amazed at the detail and clarity until I notice the nipple elbows and shit.
>>13021
>>The batch members of the bucket are not shuffled.
What the fuck... Maybe that's why my second small training dataset did so much worse than my first.
>>13075 and before someone akshuallys me, it's polynomial growth in computation for training the same unet at higher resolution. I think.
(68.75 KB 512x704 catbox_s3vv1t.jpg)

jpeg no metadata test
>>13077 but do archons need oral hygiene
>>13078 They do get cavities apparently, see Raiden's lines. But they can just fix up their physical vessels.
>>12566 Sorry, but the error message contains no useful information on what I suppose is a timeout; that's by design of Tampermonkey & co. Maybe you should consider directly uploading to the catbox.moe main page next time on an error like this, to see if the problem persists and for more information.
>>12939 There should be an error message that says "parameter chunk not found" in your console, but yeah, I agree it's too technical and hidden. The problem is the script doesn't download the entire image; it downloads the first few kBs and then tries to make sense out of the fragment, so there's an array of errors that naturally occurs. I updated the script so it now distinguishes "no metadata" from other generic errors. update / install link: https://gitgud.io/GautierElla/8chan_hdg_catbox_userscript/-/raw/master/out/catbox.user.js
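For anyone curious how the partial-download trick can work at all: a PNG is an 8-byte signature followed by length/type/data/CRC chunks, and webui writes the prompt into a tEXt chunk keyed `parameters` near the start of the file, so the first few kB are usually enough. A minimal sketch of the same parse in Python (not the userscript's actual code):

```python
import struct

def png_parameters(data: bytes):
    """Scan PNG chunks in a byte prefix for a tEXt chunk whose keyword is
    'parameters'. Works on a truncated file; returns None if not found."""
    pos = 8  # skip the 8-byte PNG signature
    while pos + 8 <= len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])  # big-endian chunk length
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt" and b"\x00" in body:
            key, _, text = body.partition(b"\x00")  # keyword NUL text
            if key == b"parameters":
                return text.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return None
```

If the prefix cuts off mid-chunk the loop simply runs out of bytes and returns None, which is exactly the "no metadata / truncated" case the script has to distinguish.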
>>13069 Great lora, thanks My gens of miyu (with a character lora even) also sometimes have white hair when prompted otherwise
>>13082 >>13081 I think the white hair is actually because of the huge amount of gura in the dataset, tenroy actually draws miyu with purple hair for some reason, I found that to be true when I was using the really old miyu character lora I had gotten long ago. after taking it off, my gens weren't baked though lel. glad you enjoy though, I thought that the color of tenroy was just too good to pass me by. I plan on splashing it into whatever I gen from now on (unless dark and dreary of course) because it just is so close to that color profile I've been trying to get
>>13084 pretty I need to get off my ass and scrape-and-bake some precure
>>13084 decent monmo, just needs the scars and maybe a bit more toned
are there any tools to merge loras together? or do I have to make one myself I want to make a spice mix of a bunch of different artist styles that I can reuse
>>13087 Right there anon https://github.com/kohya-ss/sd-scripts/tree/main/networks Easy scripts has a variant as well if you use that.
>>13088 thanks, missed that
>>13088 >>13089 hmm, that script doesn't seem to merge loras correctly. you can't add together the a and b tensors of a lora and expect the outputs to work, but that's what it's doing. you need to either concatenate the a and b tensors or turn them into a full weight matrix and save that instead
>>13090 make a PR and i'm sure kohya would approve it
>>13090 wakarimasen lol The LoRA author on leddit did say something about needing to reach out about lora combination but I don't know if he meant webui, additional network extension, or what.
>>13091 working on a correct script rn, ill post when im done but I don't have a burner github acc to make a pr with
>>13092 it's just how the math works:
1 lora = w + a1 * b1
2 loras = w + a1 * b1 + a2 * b2
(a1 + a2) * (b1 + b2) is not a1 * b1 + a2 * b2
think FOIL but with high-dimensional matrices instead of numbers, so you have to do something different to merge them
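That FOIL point is easy to sanity-check numerically. A toy numpy sketch (toy dimensions, not sd-scripts code): each LoRA's delta is up @ down, and concatenating the up matrices column-wise and the down matrices row-wise reproduces the exact sum of deltas, at the cost of the merged rank growing to r1 + r2:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 4  # toy layer dim and lora rank

# two LoRAs for the same layer: delta_W = up @ down
up1, down1 = rng.normal(size=(d, r)), rng.normal(size=(r, d))
up2, down2 = rng.normal(size=(d, r)), rng.normal(size=(r, d))

correct = up1 @ down1 + up2 @ down2        # what merging should produce
naive = (up1 + up2) @ (down1 + down2)      # what naively adding tensors does

# concatenation: [up1 | up2] @ [down1 ; down2] == up1@down1 + up2@down2
up_cat = np.concatenate([up1, up2], axis=1)        # (d, 2r)
down_cat = np.concatenate([down1, down2], axis=0)  # (2r, d)

assert np.allclose(up_cat @ down_cat, correct)
assert not np.allclose(naive, correct)  # the FOIL cross-terms ruin it
```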
>>13093 Oh, is that not what's happening here? https://github.com/kohya-ss/sd-scripts/blob/main/networks/svd_merge_lora.py#L84 At any rate I need someone with a burner github too, been meaning to file an issue about prior preservation loss
>>13094 oh ffs that's literally what I was just implementing I was looking at the "merge_lora.py" and "merge_lora_old.py" files... let me check if that works
Does generating in a batch yield noticeably different results compared to generating 1 by 1?
>>13097 jesus I'm impressed the distinctive eyes are pretty much retained
>>13101 This is like the 4th time I come here to share one of my posts and somebody's already done it for me, lol. Anyway I have a bit more writeup on how to use it here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/9084
>>13103 maybe you should post them here first kek
>>13103 So now I just need to use your LoRA right? I wonder how that works on other models, mainly NAI model.
easyscripts anon when do you plan on updating to all the stuff Kohya did yesterday? I'll be working on doing proper tagging and getting new training data in the meantime before I bake any new LORAs
>>13106 I'm working on it now, bit busy today, but I will get it out
>>12683 >>12684 >>12725 Looks like this is coming to Kohya soon. Will embeds make a comeback? https://github.com/kohya-ss/sd-scripts/pull/327
>>13108 NTA but, it's possible, I can see it being an addition to LoRA more than a thing on its own. But hey, if it produces similar quality to dim16 LoRA, then i'm all for it, I'll start baking both a lora and an xti and bundle them together
>>13109 so this will be like the old meta of making a textual embedding along with a LORA kek, good to see it come back but at least if it's implemented with sd-scripts it can be faster to bake
>>13110 for sure, that reminds me yet again that my scripts need to be updated to support the rest of the training things from kohya
>>13110
>the old meta of making a textual embedding along with a LORA
I still do that, even though I don't know if the embeds really help with anything, the job just doesn't feel complete until I've got both a lora and an embed.
>>13110 The schizo in me finds it funny that /hdg/ was just talking about embeds and hypernetworks yesterday(?) kek
(1.73 MB 1152x1536 catbox_1smwbm.png)

(1.92 MB 1152x1536 catbox_dluwwk.png)

(1.83 MB 1152x1536 catbox_gog93u.png)

(1.67 MB 1152x1536 catbox_5ov5la.png)

Testing and comparing some different style LoRA mixes on my months old mesugaki prompt, this is pretty fun.
>>13114 seems like they are having fun. that reminds me, I need to make a wildcard for style LoRA so I can just let it randomly select 2 or 3 and just let it run for a while, might get some good results out of it.
in other news though, I am basically done with the update, just need to actually upload it and update the readme. I'm putting the update to lora resizing on hold for now to get this out earlier. if you're curious, the lora resizing update is just a simple update to make multiple runs very easy, two ways to do it: either provide a txt file in a specific format, or use the popups to set thresholds that it evenly spaces out according to your other settings. you'll see what I mean when I'm done with it, should make doing 40 or so resizes really painless.
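A wildcard-style random picker like that could be prototyped in a few lines; the lora names and weight range here are made-up placeholders, and the output uses webui's <lora:name:weight> prompt syntax:

```python
import random

# placeholder style lora pool, substitute your own filenames
STYLES = ["ushiyama_ame", "alkemanubis", "cogecha", "motomurabito", "kidmo"]

def style_mix(pool=STYLES, k_choices=(2, 3), w_lo=0.4, w_hi=0.8):
    """Pick 2 or 3 styles at random with random weights,
    formatted as webui prompt tags."""
    picks = random.sample(pool, random.choice(k_choices))
    return " ".join(f"<lora:{s}:{random.uniform(w_lo, w_hi):.2f}>" for s in picks)

print(style_mix())  # e.g. <lora:kidmo:0.63> <lora:cogecha:0.41>
```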
hey all, finished the update. changelog:
* Updated both LyCORIS and sd-scripts to support the new sd-scripts args
* Added all new sd-scripts options to the `argslist.py`
* Added min_snr_gamma to the popups because it seems like it's going to be useful
NOTE: Because of a new config file, you have to run upgrade.bat, otherwise it will not work for you. I was having an issue where the library files weren't being properly tracked; running upgrade.bat to re-run the requirements.txt fixed it, so before you try and run it, make sure to run upgrade.bat
>>13114 damn cocky AI generated brat needs correction!
does anyone find that training genshin characters is really difficult? It's so hard for the Lora to get every little detail of the character correct
>>13119 isnt it because the dataset is trash? not shitting on you, it's more on the artists for drawing their headcanon of the character and not the actual character design.
>>13120 no, this still happens even after I manually curate the dataset to contain only "canon" character designs.
>>13119 Can you do the red Nahida anon used? 892x1232?
>>13122 I am Nahida anon, I'm trying to perfect my Nilou lora, but I'm having to make a tradeoff between how overpowering her default outfit is vs how accurate her outfit is which is a choice that I didn't have to make for Nahida. Maybe it's because Nahida's dress is much simpler than Nilou's weird slutwear
>>13123 Makes sense, did any LORA maker on the Chans or Civitai manage to get her outfit details right?
NAI apparently used 256-1024 for bucketing, maybe I'll try 768 or 896 with 1024 and see how the details turn out with LORAs trained off of it
(1.60 MB 960x1440 catbox_upb1bz.png)

(1.68 MB 960x1440 catbox_lzovpm.png)

(3.72 MB 1920x1676 catbox_nxlztv.png)

(3.36 MB 1920x1676 catbox_hq0me2.png)

>>13119 I don't think any lora is ever going to get smaller details 100% correct even with overfitting. I've just kind of accepted that since 99% of my prompts don't include most of their outfit anyway
>>13126 Maybe that's where doing a combo with the new textual inversion embeddings, once Kohya implements them, can help with more precise outfit details
(1.55 MB 1440x960 catbox_beneco.png)

>>13127 Well shit it looks like webui git pull broke composable lora
(1.20 MB 960x1440 catbox_ersq0p.png)

(1.33 MB 960x1440 catbox_mixuai.png)

Well at least hires fix seems more stable, I can reliably get 960x1440 on 8GB vram without errors
>>13119 It's actually just insanely trash images. I cleaned up and made a barbara for someone, and the thousands of images of her all largely suck or don't look like her. Genshit has like the worst fanbase for good art from what I've seen.
Does anyone know why openpose is not working for me on controlnet? Rest of the stuff works so dunno whats going on
How do you merge loras? Want to try to salvage this.
>>13133 you can use the checkpoint merger built into webui but expect unforeseen consequences when merging LoRAs
>>13134 why would you do this instead of using the scripts kohya provides?
>>13135 I'm trying to do kohya scripts but I can't actually find it now and I couldn't see it on his github. But last time I tried running it it wouldn't merge.
think it's more accurate after I went back and autistically tagged everything ...except I should have tagged twintails as short_twintails
now that I think about it half of the fanart with exposed skin doesn't show her scars. and I noticed there's even some continuity errors in the anime where they weren't drawn in.
>>13137 Ah k I do see it in my folder so how do I actually use this thing?
>>13142 I actually found that and tried it and couldn't get it to merge loras it kept throwing back errors.
Does anyone happen to know if the Civitai page ExpMixLine had any info on how it was made? I wanted to get some info for mixing purposes but the page is nuked and even internet archive's records were excluded.
>>13140 If you are using the easy training scripts, run the lora_merge.bat script that I have. it'll walk you through the process, as it uses popups
>>13145 Yeah that's becoming my last resort if I want to do this but I don't want to brick my kohya directory in some unforseen way since I like just using my script to train loras.
>>13144 i gotchu senpai

[The fp16 pruned version can be found in the list by clicking the arrow next to the download button.]
This is an ExpMix (https://civitai.com/models/11847/expmix) variation model. Thick lines and simple coloring to give a 2D look. Example pictures were created with the medvram, xformers settings.

Merge Source (v2):
- ExpMix_Line
- ChikMix
- SamDoesArts (Sam Yang) Style LoRA
- ligne claire style (cogecha焦茶) LoRA
- Motomurabito Style LORA
- Kidmo//style LoRa

Recommended Settings:
- Sampling method: DPM++ SDE Karras
- Clip skip: 1~2
- Hires.fix upscaler: R-ESRGAN 4x+Anime6B
- Denoise strength: 0.4~0.65
- CFG Scale: 6~10

Recommended Prompt:
- Prompt: ((masterpiece:1.2, best quality)), 1lady, solo
- Negative: (worst quality, low quality:1.4), multiple views, blurry, multiple panels, watermark, letterbox, text

should probably start scraping civit hourly by now with all the nuking going on...
>>13146 I mean If you already have it installed, then it should just be there? if you don't because you didn't update then you can install a new version in a different folder
>nobody's done a gwen lora
>>13109 I don't see a reason to use them together, it would probably negate the main advantage XTI is purported to have over LoRA (better composability).
>>13119 Mihoyo character designers are actually futureproofing their design work with as much intricacy as possible. Some random 00s anime character that needed thousands of keyframes drawn over the course of their show is never going to compare to a couple CGs and a make-once 3D model.
>>13123 And honestly even your LoRA didn't nail the outfit either
>>13125 I recently cranked maximum resolution all the way to 1920 with 1024x1024 as the area basis.
>>13147 that would be a good idea, I have tons of LoRAs that dont exist on there anymore, someone should at least archive even if some of them are baked to hell shit.
>>13148 I bit the bullet, installed your ez scripts to run training because I'm having an api issue now or something with the other script I'm using. I've run through the setup 3 times trying to make the .json to load up, and it doesn't save it.
>>13147 I fucking love you, no homo. Didn't expect it to be a ton of LoRA merges. You wouldn't happen to have their v1 (ExpMix_Line) source?
>should probably start scraping civit hourly
I appreciate your service
I did an Add Difference with my ufotable model and ExpMixLine v2 (and nai-full naturally) and it added a great deal of depth to it. Some stuff still looks crunchy or has weird bloom but I know that's from my core data that still needs to be cleaned up and tagged up properly, especially the porn. Also figured out that by prompting my ufotable trigger tag at 1.4, it seems to completely drown out the competing data, at the cost that I need to prompt (ufotable:1.4) at the start of every 75 token chunk, which is a pain. The only for-sure negative the mix did was fuck up the genitalia so I need to look into that. Other than that, ExpMixLine is a really good model to mix with.
>>13126 I think nahida anon's approach is the best option we have now, that is to train a separate lora for the details you care about One of my character lora is only flexible up to epoch 7, but I keep the epoch 12 around for inpainting
>>13155 Sure

2D anime style model

Merge Source (v2):
- ExpMix
- Counterfeit-V2.5
- atdanStyle Lora
- yoneyamaMaiStyle Lora
- Motomurabito Style LORA
- ligne claire style (cogecha焦茶) LoRA

Recommended Settings:
- Sampling method: DPM++ SDE Karras
- Clip skip: 1~2
- Hires.fix upscaler: R-ESRGAN 4x+Anime6B
- Denoise strength: 0.4~0.65
- CFG Scale: 6~10

Recommended Prompt:
- Prompt: ((masterpiece:1.2, best quality)), 1lady, solo
- Negative: (worst quality, low quality:1.4), multiple views, blurry, multiple panels, watermark, letterbox, text

I'm sitting on a 210 MB metadata dump of (most) of civit's models, if anyone wants it
>>13156 thanks, it triggers me that they put the name of the mix above the merge sources. it seems that the two big contributors to both of his mixes are the motomurabito and ligne claire loras. I'm not too familiar with the proper way to mix LoRAs into ckpts.
>I'm sitting on a 210 MB metadata dump
how many model pages is that?
>>13156 (me) fuck excuse the accidental reddit spacing
And here's a partial list of removed LoRAs, there will be gaps because I was lazy with scraping. Need to set up my server again so I can run it forever. https://rentry.org/zx297/raw >>13156 Looks like 124 pages of 100 models each from the output of my script
>>13158 nice, we can see what's missing so we can re-up on a different file host, just in case, good job
(1.71 MB 1152x1536 catbox_ol86us.png)

(1.94 MB 1152x1536 catbox_y2ds7t.png)

(1.73 MB 1152x1536 catbox_rwwo48.png)

(1.66 MB 1152x1536 catbox_c01tf6.png)

Oh no
>>12742 vram has always been more futureproof than raw performance. Hell, the Tesla P40 (essentially a 1080 w/ 24gb) is still a respectable card for gaming (it also does well on llms and diffusion, for $200 on ebay). for training anything that'll take more than a few hours on my 6800xt i just rent by the hour, though.
>>13158 >Bionicle LORA >Batman >Michelle Obama >Glass Sculptures The first three make sense, but I want to believe there's one civit mod who really hates glass.
wao who knew switching motherboards would be so easy, can go back to doing exactly what I was doing. only crazy thing is seeing how close 3 slot nvlink 3090s are kek
(2.13 MB 1024x1536 catbox_6i95zk.png)

(2.29 MB 1024x1536 catbox_8tqmu1.png)

(2.22 MB 1024x1536 catbox_uhtdhe.png)

(2.30 MB 1024x1536 catbox_nf0tvp.png)

It's really hard for me to give up on Latent upscaling, Latent 2.0 amazing hands edition when?
>>13164 oh damn, you did it?
>>13166 nah just managed to connect two for now, still waiting on the bridge but yeah confirmed I can use a 3 slot bridge, but with how close they are that poor top card gets the hottest kek
>>13167 probably worth it to power limit all 3 to something like -10 or -15% and benchmark them
>>13168 yeah making sure to see how much I can undervolt it at the base clock until I crash, worth it to save on some temps and electricity usage
>>13169 undervolting is very fiddly, just move the power target/limit slider in afterburner or precision x or whatever you use
>>13164 So which board did you end up getting to NVLink?
>>13171 b550-XE Asus ROG strix, all the other boards I was looking at were 3 slot bridge tier from the distance of their 16x PCIE gen 4 lanes, only the 500 buck plus EATX board had the right distance for 4 bridge NVLINK
>>13172 also it was the only board on sale for 100 dollars less new so I got it, the other board I was planning to get was the hero series but they were in the 400 range, the EATX board that would have given me the 4 bridge slot distance would have been the EVGA Dark but it's currently scalped on the used-new market for close to 600 in my area on ebay
did anyone test out the new stuff kohya released with the recent pull?
>>13174 I'm not pulling until he adds torch 2.0
>>13175 don't think he will, pretty sure his stance is that he wont be adding it. at the very least, the easy training scripts supports torch 2
>>13175 >>13176 git pulling has nothing to do with torch 2.0? you don't need him to 'add support" to use it
>>13178 I know, I'm just mentioning that they aren't gonna support torch 2 directly
So if I wanna git pull right now, what do I need to do extra to make sure shits working?
>>13182 model and loras please?
(1.99 MB 960x1440 catbox_us05bk.png)

>>13184 based64+ushiyama ame+alkemanubis https://files.catbox.moe/cqywhi.png https://files.catbox.moe/4aimts.png https://files.catbox.moe/s57c26.png https://files.catbox.moe/63ksxz.png i should probably install the catbox extension on mobile, proompting while at work
>>13186 based, thank you
(2.14 MB 960x1440 catbox_oebeeo.png)

(1.99 MB 960x1440 catbox_ksoykz.png)

(2.00 MB 960x1440 catbox_uo9fb9.png)

(2.02 MB 960x1440 catbox_qcygxw.png)

(2.09 MB 960x1440 catbox_g9sqqn.png)

Ah yes the water that turns you into loli Chitanda
>>13189 I'm very curious to know what's in the water
(2.10 MB 960x1440 catbox_ln7d40.png)

(1.87 MB 960x1440 catbox_od330d.png)

>>13190 Only the finest juices
(1.80 MB 1440x960 catbox_uvqh0s.png)

(2.03 MB 960x1440 catbox_3arw4r.png)

Honestly surprised I haven't seen many, if any, underwater prompts
>>13189 taking a running swan dive into that cursed spring
(2.00 MB 960x1440 catbox_70ktvy.png)

>>13193 Careful not to dive too far down
(1.98 MB 1536x1536 catbox_aawkau.png)

(2.15 MB 1536x1536 catbox_lupk06.png)

(2.17 MB 1536x1536 catbox_7pbnex.png)

(1.71 MB 1536x1536 catbox_u7eqcb.png)

I can't stop fucking cooming
>>13195 >can generate whatever he wants >still gets cucked by niggers
(1.85 MB 1440x960 catbox_1q6xxy.png)

(2.08 MB 960x1440 catbox_7u7dla.png)

(1.82 MB 1440x960 catbox_sfixv1.png)

(2.05 MB 1440x960 catbox_a4qp6i.png)

Mirrors, how do they work
>>13195 Thanks. I hate it.
(2.10 MB 1024x1536 catbox_0h9wj0.png)

(2.27 MB 1024x1536 catbox_5qxcol.png)

(2.23 MB 1024x1536 catbox_rss04j.png)

(2.19 MB 1024x1536 catbox_rmz80r.png)

(1.30 MB 960x1152 catbox_gdxy3f.png)

(1.34 MB 960x1152 catbox_auj24z.png)

(1.72 MB 1152x1152 catbox_718k9z.png)

(1.50 MB 960x1152 catbox_hr0you.png)

I got a comment on pixiv telling me to upload Shinobu to civitai so nobody steals credit for the LoRA. I'll leave it up to whoever tries to steal it whether or not they feel like getting permabanned. Also, have an ordinary witch doing ordinary witch things.
>>13201 that apple looks bad. DUDUDUDUDUDDUDUDUDUDUD DUDUDUDUDU
>>13192 I just tried and it did not play nice. Probably a reason why
haven't seen a takamichi lora yet
>>13201 Has this happened before? I've been lucky and none of my lora's have been uploaded anywhere else that i didnt want to.
>>13202 From my experience, when doing the daily pixiv themes, getting actually normal sized and decent looking fruits to generate was a pain in the ass
(1.18 MB 960x1152 catbox_lmgf66.png)

(1.33 MB 960x1152 catbox_cbq53f.png)

(1.52 MB 960x1152 catbox_gebhb8.png)

(1.29 MB 1152x1152 catbox_qvoh55.png)

>>13205 People on the touhou AI discord have definitely had their LoRAs re-uploaded without consent a few times, can't remember if I've seen it happen here or on /h/. >>13206 I do that apple prompt a lot with different LoRAs and settings specifically because the AI does funny stuff with it. The one I usually get is the apple being volleyball sized and the character holding it with two hands, this is the first time it gave me true horror.
>>13205 people on civitai steal LoRAs from 4chin and that discord all the time
>>13208 >>13207 Thats scummy af ngl
>>13209 Having said that, idk how we feel about remaking from scratch stuff that already exists and uploading it to civitai, cause ive seen some shit on both sites that looks horrible or could be improved, but i feel like uploading it to civitai if it already exists is kinda being an attention seeker
>>13210 shamelessly done it myself, shun + shunny was one such case, there is already both a shun and shunny lora, granted neither were together, or good, so perhaps them being together in one lora is enough to call it different?
>>13210 I wouldn't think too much about it. Just look at genshin loras. I bet there's like 10 raiden and 11 Hutao LoRas.
>>13209 ngl, thats kind of why I never posted my stuff on /hdg/ even if it was for help or critiques. Also, not sure if its the same or not, but the touhou/WD discord also has a channel for Easy Scripts
>>13214 yeah, that's because I made one for it. Thought people there would want to make use of it, and it gives people a direct line to me in case they have questions/technical problems
>>13215 Baka man says Massive W
(2.11 MB 1024x1536 catbox_extvg5.png)

(2.00 MB 1024x1536 catbox_38nn8c.png)

(2.07 MB 1024x1536 catbox_07h0sa.png)

(2.06 MB 1024x1536 catbox_mi2yxt.png)

>>13216 ah yes "baka man" lel. in other news, work on the wildcards is coming along very well. able to make much more expressive character designs through wildcards now, this character here is one example of that. still have much more work to do though.
>>13215 ah ok, then that's alright
(2.01 MB 1024x1536 catbox_wihdjx.png)

(2.05 MB 1024x1536 catbox_c8ovt7.png)

(1.83 MB 1024x1536 catbox_0a04ds.png)

(1.90 MB 1024x1536 catbox_xp8o9h.png)

>>13218 yeah, I may have made the scripts and everything for the 4/8chan board, but the scripts are easy enough that most people can grasp how to use them. I just felt like having a wider reach would help more than hinder you know?
>>13219 I was just concerned that they were leeched without your knowledge since everything else gets stolen. But if you did it for them then it's all good.
>>13220 Fair enough! Thanks for being concerned!
(2.17 MB 1440x960 catbox_y6pu44.png)

Huh didn't expect heart pupils
I mean, if getting stolen is your concern: most of your Loras are made by "stealing" something anyway. It feels hypocritical to have this concern now that the same thing happens to you.
>>13223 Soon we'll be going into the koikatsu meta where people will steal each others cards and sell them like they made them, patreons for creators, paywalls and all that jazz.
>>13224 Probably; gotta enjoy these few months of through dick, unity I guess. I mean there's some patreon LoRa on civitai and all that early access jazz.
>>13205 >>13208 Yeah had one dude from /h/ yoink mine after asking me to make one, at least I got credit for it as some random anon in the description but he just used basically my readme, sample images and everything for it. >>13223 I don't even know if stolen is the right word and yeah KK parallels aside, I put the stuff on a mega so I'm not claiming it necessarily but it feels like a slap. Naivety is what I'd call mine since I'd been fine with through dick unity, still am but I'm being choosier in what I upload now.
>>13195 >darkmode i've always heard about this embedding but never saw it uploaded anywhere
>>13226 If you care about the clout enough that it will bother you if it gets stolen, you can just upload it to the most mainstream platform aka civitai, as long as you're willing to bend according to the mods' will. Or huggingface if you're less willing.  There's always stealing/pirating/plundering on the internet, but more often than not, simply sharing outweighs the former when you consider something like >>12281. Someone cums to your lora or gen, proving that your skill is adequate enough for making anon experience the pleasure of cumming inside.
>>13206 Sorry anon, I was drunk and I was going through my bad apple remixes
(1.50 MB 1024x1536 catbox_bu3auc.png)

(1.29 MB 896x1280 catbox_n6m2gh.png)

(1.09 MB 896x1280 catbox_zr9sju.png)

(1.06 MB 896x1280 catbox_ywokwq.png)

>>12830 I like how it interprets traditional asian garments
fucking ESLs, what the fuck does this mean?
>>13231 >No it does >Could’ve mean “No it doesn’t” >Check spec sheet If I wrote an email like that at my old job I would’ve been yelled at by 8 different people office space style.
(1.93 MB 1024x1536 00138-3359056743.png)

(1.74 MB 1024x1536 00148-1388779628.png)

(1.63 MB 1024x1536 00003-382779544.png)

(2.05 MB 1024x1536 00334-274364282.png)

https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg gil elvgren added >>13230 a lot of his dataset was that so makes sense
>>13231 what the hell type of support is this? How long did you even wait for this Nigerian prince tier response?
>>13234 four work days (sent the ticket on sunday)
>>13233 anon i'm still waiting for komekko... :(
>>13236 ngl i kinda forgot lol. i'll do her after i do the next style lora i finish. but for now vidya
>>13235 holy kek, but yeah anon, SLI support on motherboards is pretty much dead for future platforms including intel's next gen. just get the best AM4 chip, and if you have more money than me the EVGA Dark, so you can have the 4 bridge NVLINK slot support if you want to run dual 3090s that way. or you can just run them without NVLINK like I am right now, but that's because I'm still looking for a good deal on a 3 slot bridge
>>13238 i think the crosshair hero or whatever is one of the few AM5 boards that supports nvlink but out of the two versions the nearly 1000€ one is the one that supports it, the 650€ one doesn't iirc
>>13238 just gonna stick with the "build a dual 3090 am4 rig whenever i can get a used 4090/ti for less than 1k" plan
>>13231 it means you've made the mistake of purchasing an ASUS product, their atrocious customer service hasn't changed in 9 years I can see
>have 4090 >but now I want A400s for training turn back, this is hell you're walking into
>>13226 i used people's art in my datasets. regardless of morals this is an undeniable fact. if i considered reuploads of my models stealing i would just quit. watching numbers go up and attaching them to a brand is a sort of pattern matching. wanting them to go up faster is a plague that affects any pastime that grows too big. at this point i'm inclined to believe it's the human condition.
this talk about reuploading is going to make me upload based66 to mega first for you guys then civitai, I used to keep it just anonymous but I want to see numbers like a vtuber
>>13244 >/#/fagging in my porn hideout AHHHHHHHHHHHHHHHH
>>13245 The numbers are important peko
>>13242 If I make money in any way from this I wouldn't mind getting that deep into hell otherwise for now I'm satisfied with my current setup until a consumer gamer card has 48gbs native
>>13241 don't think i had much choice tbh, anus seems to have decent x670e motherboards while gigashit and assrock have the worst. heard MSI ones are decent but the prices are fucked, 900€ for the ACE and 1400-1500€ for the godlike
this is useful if you download stuff from Civitai https://github.com/butaixianran/Stable-Diffusion-Webui-Civitai-Helper
(2.05 MB 1024x1536 catbox_4ds9h2.png)

(1.84 MB 1024x1536 catbox_afoxy9.png)

(1.81 MB 1024x1536 catbox_wswvy2.png)

(2.11 MB 1024x1536 catbox_6znwxj.png)

>>13249 honestly forgot that was a thing, I've had it for a while now but I keep forgetting to actually use it
Based66 progress is good so far, I'm using MBW this time and I'm hoping the final recipe will make it easier to do sex and LORAs right away like Based64
>>13251 are you incorporating hll4?
>>13252 yes, p2
>>13253 It generates weird shadows and blurry edges. Wait for p3
>>13254 damn alright thanks anon
>>13254 goddammit based66 is gonna be the new wd 1.4
what is the point of being able to hide users' uploads on civitai when it doesn't actually hide them
>latent and esrgananimewithareallylongname look the best but they consistently add unwanted details or transform pre-existing details into something i don't want what do?
>>13195 Don't worry anon, I respect your taste and think these gens are good.
>>13227 It's in the hdgfaq, seems like it's something that never made its way into the gayshit repo because that's more focused on LoRAs whereas this is an embed. https://rentry.org/hdgfaq#blacked-embed
There's also a finetune that was made recently on more content, seems like it gives better results.
https://pixeldrain.com/l/C81rS9xh
https://pixeldrain.com/l/LRjAJHsr
>>13258 Do you mean they're being unlisted? What are they hiding now?
>>13260 Absolutely fucking disgusting, if you're gonna post this trash you belong on 4chan's /hdg/ or some equally pozzed d*scord and not here.
>>13248 I'm satisfied with my MSI Mortar but will probably need to replace it if I ever get my hands on one of the quad-slot GPUs
>>13260 >Do you mean they're being unlisted? What are they hiding now? No, they added a feature to blacklist tags and users but if you blacklist something it doesn't actually hide it from the front page, it just grays out the tile and you can still see and click it lmao.
>>13249 I use a different version that's compatible with additional networks metadata and copies all the API responses offline https://github.com/space-nuko/civitai-local-database
>>13263 once you reload the page they actually disappear though
>>13262 lmao even b650s are ~300€ average, absolutely fucked people were memeing about ddr5 prices but the real meme is am5 prices
>>13263 Ah that, I assume it's something that could be done with a css rule too to completely remove it. I've only blocked a few users and most don't seem to upload much anyway so I never noticed it.
>>13266 It was an am4. Cost $122. I use it with a 3090.
(4.44 MB 1440x2399 firefox_jJGvYxILK1.png)

>>13265 even after re-loading the site it's still like that... >>13267 I don't care enough to bother it's just funny that they added a feature which doesn't do what it's supposed to
>>13269 that's odd, it's how it is for me. might be related to the browser you are using? I'm on firefox and have no issues with it
>>13269 Oh that's the same user I blocked too lmao. When I initially blocked them it looked like that but I don't see their stuff at all now yeah. Might need to clear your cache or something. I think it has to do with how the site works where a lot of it is dynamic and not served as a static page.
>>13251 I wonder what hll anon is doing differently that LoRAs don't play nice. I know the training he posted here deviates from what he mentioned on /vtai/, and he has noted LoRAs weren't working that well. Could that be why the recent hll-mixed based6x models weren't working that well? Which btw, do you mind sharing how you've done your mixing so far?
>>13272 I'm redoing it from scratch again because part 6 was bad with my LORA example, it's definitely refslave v2 doing some fuckery
>>13273 ah yea, the weird finetune model that we have no clue what and how it was trained.
>>13274 I found this online, cleaned up the formatting since it was fucked and confusing to read

**Model Merging Recipe [From Original Documentation]**

**Tertiary Bases (First stage, used to make the secondary bases)**

**➤ Tertiary Base 1 (Normal Merging)**

| Merge | Interpolation Method | Primary Model (A) | Secondary Model (B) | Tertiary Model | Output Model |
|---|---|---|---|---|---|
| 1 | Weighted sum @ 0.5 | AOM3A2 | Counterfeit-V2.5 | N/A | RefSlave-V2-Bronze1 |
| 2 | Weighted sum @ 0.3 | RefSlave-V2-Bronze1 | pastelmix-better-vae-fp16 | N/A | RefSlave-V2-Bronze2 |
| 3 | Weighted sum @ 0.2 | RefSlave-V2-Bronze2 | Ultracolor.v4 | N/A | RefSlave-V2-Bronze3 |
| 4 | Add Difference @ 0.9 | RefSlave-V1 | RefSlave-V2-Bronze3 | EasyNegative | **RefSlave-V2-SilverA** |

**➤ Tertiary Base 2 (Normal Merging)**

| Merge | Interpolation Method | Primary Model (A) | Secondary Model (B) | Tertiary Model | Output Model |
|---|---|---|---|---|---|
| 5 | Weighted sum @ 0.55 | pastelmix-better-vae-fp16 | RefSlave-V2-Bronze1 | N/A | RefSlave-V2-Bronze2B |
| 6 | Add Difference @ 0.8 | Ultracolor.v4 | RefSlave-V2-Bronze2B | EasyNegative | RefSlave-V2-Bronze3B |
| 7 | Add Difference @ 0.4 | RefSlave-V2-Bronze3B | AOM3A1 | EasyNegative | **RefSlave-V2-SilverB** |

**Secondary Bases (Second stage, used to make the primary base)**

**➤ Secondary Base (Block Weight Merging)**

| Merge | Primary Model (A) | Secondary Model (B) | base_alpha | Weight Values | Output Model |
|---|---|---|---|---|---|
| 8 | RefSlave-V2-Silver-A | RefSlave-V2-Silver-B | 0 | 1,1,1,1,0.75,0.5,0.33,0.2,0.1,0,0,0,0,0.15,0.15,0.33,0.5,0.6,0.75,1,1,1,1,1,1 | RefSlave-V2-Silver-AB |
| 9 | Counterfeit-V2.5 | RefSlave-V2-Silver-AB | 0 | 1,1,1,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1 | **RefSlave-V2-Gold** |

**Final Merge (Third stage, merging the primary base)**

**➤ Primary Base (Block Weight Merging)**

| Merge | Primary Model (A) | Secondary Model (B) | base_alpha | Weight Values | Output Model |
|---|---|---|---|---|---|
| 10 | AOM3A2 | RefSlave-V2-Gold | 1 | 1,1,1,1,0.75,0.5,0.33,0.2,0.1,0,0,0,0,0.15,0.15,0.33,0.5,0.6,0.75,1,1,1,1 | |

**Model Sources [From Original Documentation]**

- AOM3A1 [Repository](https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix3/AOM3A1.safetensors)
- AOM3A2 [Repository](https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix3/AOM3A2.safetensors)
- AOM3A3 [Repository](https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix3/AOM3A3.safetensors)
- Counterfeit-V2.5 [Repository](https://huggingface.co/gsdf/Counterfeit-V2.5/blob/main/Counterfeit-V2.5.safetensors)
- pastelmix-better-vae-fp16 [Repository](https://huggingface.co/andite/pastel-mix/blob/main/pastelmix-better-vae-fp16.safetensors)
- Ultracolor.v4 [Repository](https://huggingface.co/xdive/ultracolor.v4/blob/main/Ultracolor.v4.ckpt)
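For anyone trying to follow the recipe, the two "normal merging" interpolation methods boil down to simple per-tensor arithmetic over the two (or three) checkpoints' state dicts. A minimal sketch, with plain floats standing in for the checkpoint tensors:

```python
def weighted_sum(a, b, alpha):
    """Weighted sum @ alpha: (1 - alpha) * A + alpha * B for every key."""
    return {k: (1 - alpha) * a[k] + alpha * b[k] for k in a}

def add_difference(a, b, c, m):
    """Add difference @ m: A + m * (B - C) for every key."""
    return {k: a[k] + m * (b[k] - c[k]) for k in a}

# merge 1 from the recipe: AOM3A2 x Counterfeit-V2.5 @ 0.5 -> RefSlave-V2-Bronze1
aom3a2      = {"w": 0.2}   # stand-in value for one tensor
counterfeit = {"w": 0.8}
bronze1 = weighted_sum(aom3a2, counterfeit, 0.5)
```

In a real merge each value is a torch tensor loaded from the .ckpt/.safetensors file, but the arithmetic per key is exactly this.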
>>13275 NTA but that's for some other refslave variant iirc, not the original V2 (which didn't have AOM3 mixed in)
>I am growing stronger >>13275 >>13276 What a fucking shame that this info will just disappear because of some random buttmad going on in sites like Civitai and such. I ordered stuff for a small server but that won't be coming in for a while, I could join in on the scraping for data preservation or try to host a small site with mixing information.
>>13277 Correct me if I'm wrong but can't modern NAS enclosures act like their own standalone mini-servers? I wanted one but I spent way too much on storage already so I got a cheap qnap DAS/LAS instead, my router supports some weird onboard NAS mode so I'll experiment with that
(2.28 MB 1536x1536 catbox_dtxjxh.png)

(2.25 MB 1536x1536 catbox_ju65yz.png)

>>13260 As a son of cain, it's kinda hard to find art where the dicks aren't white as snow but without all the cringe BNWO/Spade shit. Is there a link to the more recent finetune?
>>13279 >just admits he's a kike who's into nigger cuckshit Bold move, canaanite.
(457.23 KB 3072x3072 catbox_1bvhhf.jpg)

(1.95 MB 1536x1536 catbox_gzm4bv.png)

>>13279 oh my mistake, there's a link right there.
(2.21 MB 1536x1536 catbox_rxjw70.png)

(2.40 MB 1536x1536 catbox_x9b5oq.png)

(2.37 MB 1536x1536 catbox_5jt3wr.png)

>>13280 if you don't like it, you can just make some white dicks. I've been moving between Based64 and Based65. Seems like there's no "right" answer to making gens for now. I've noticed 65 keeps giving Reisen human ears for some reason
>>13278 Depends on hardware and whether your chipset will play nice with VM environments, but most NAS devices on the market are Intel-focused so they can be set up with a Plex or Jellyfin VM to stream content that can be accessed from other devices. So I don't see why you would run into problems using something else as the main VM instead of a streaming machine. I went with an AMD-focused machine for better power, run several VMs and have ZFS as my main file system.
>>13283 This is your own mix right? those magazine covers are ultra convincing
>>13284 Yea, using my ufotable finetune as a base with ExpMixLine V2 and then did a 0.5 merge with a very old schizo mix called Elden Yume Noodle. I am still not sure I ever got that mix right, several recipes were floating around for the base Noodle model. I kind of understand now why Hll anon went out of his way to encourage mixing his finetune with other models, it just brings the quality up on the first add difference. I still have to do lots of tag clean up on the base dataset, hyper focus on stuff like poses, clothing, "holding sword/weapon/gun/etc" with outside data since I still get some weird things like mutated swords, or weapon grips but no weapon like in that third image. Also still get some hard overfitting in some situations, harder to tweak on finetunes than on LoRAs. I'm hoping after 2 more training sessions I can say I have something stable.
(2.00 MB 1536x1536 catbox_g8yovv.png)

>>13260 The blacked64 mix is much more cooperative with artstyle/position loras compared to how quickly the embedding degraded the image. Definitely a gemerald
(2.07 MB 1024x1536 catbox_xvmhqa.png)

(2.09 MB 1024x1536 catbox_vzo6od.png)

(2.15 MB 1024x1536 catbox_zo5bld.png)

(1.99 MB 1024x1536 catbox_161aul.png)

>>13288 it's over... *tea kettle cries*
updates for sd-scripts
2023/3/31: Fix an issue where VRAM usage temporarily increases when loading a model in train_network.py. Fix an error that occurs when loading a .safetensors model in train_network.py (#354).
2023/3/30: Support P+ training. Thank you jakaline-dev! See #327 for details. Use train_textual_inversion_XTI.py for training. The usage is almost the same as train_textual_inversion.py. However, sample image generation during training is not supported. Use gen_img_diffusers.py for image generation (I think Web UI is not supported). Specify the embedding with the --XTI_embeddings option.
>>13288 >>13289 Any hope she just joins another corpo?
>>13291 yeah, Vwhorejo most likely because of Kson. Give it a few months before giving up hope
>>13292 >yeah, Vwhorejo most likely because of Kson. kson is a fucking snake and i wouldn't be surprised if she got her to join vwhorejo and set up an onlyfans god i don't want that to happen, anything but that
>>13292 >>13293 pain peko...
What about Zaion? Wasn't she going to stream under a new persona?
Welp princess connect global just died i dont feel like ending the priconne lora collection anymore, fuck crunchyroll.
>>13295 Not sure given she is still sending messages on her past life vtuber discord server Sayu Okami
>>13287 It's actually insane how legible these are. NAI is super primitive in hindsight
>>13296 do it for the fallen ones anon How would we generate things like the little lyrical girls having to sell their bodies to old men to pay the bills after the game ended service?
>>13277 What merging extensions does everyone use? I could submit a flurry of PRs to embed the hashes/recipe for each mixed model if they save it as .safetensors
(1.60 MB 960x1440 catbox_dweb2y.png)

(1.60 MB 960x1440 catbox_m5la0d.png)

(1.54 MB 960x1440 catbox_99fefm.png)

>>13288 >Mizumizuni girl when no cock >Mizumizuni girl when not just swallowing giant cock
>>13301 this is extremely hot
(1.42 MB 960x1440 catbox_rcv3ov.png)

(1.80 MB 960x1440 catbox_p74qlh.png)

(1.43 MB 960x1440 catbox_4bnc9w.png)

(1.76 MB 960x1440 catbox_2sukds.png)

>>13302 Have some more
>>13300 I'm not good at it yet but I use the merge block GUI and the standard merge https://github.com/bbc-mc/sdweb-merge-block-weighted-gui.git There are some old models I still have recipes for that I'd like to go back and fix up
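For reference, my rough understanding of what that extension does (helper names here are hypothetical): block weight merging is just a weighted sum with a separate alpha per UNet block instead of one global value, with base_alpha covering the weights that fall outside the blocks:

```python
def block_weighted_sum(a, b, alphas, base_alpha, block_of):
    """Per-key weighted sum where the mix ratio depends on which UNet block
    the key belongs to. block_of(key) is a hypothetical helper returning a
    block index into alphas, or None for keys outside the blocks (those use
    base_alpha instead)."""
    out = {}
    for k in a:
        idx = block_of(k)
        alpha = base_alpha if idx is None else alphas[idx]
        out[k] = (1 - alpha) * a[k] + alpha * b[k]
    return out
```

That's why the recipes carry a long comma-separated list of weight values: one alpha per IN/MID/OUT block, letting you take e.g. composition from one model and detail from the other.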
>>13304 These are insanely good. especially the knife. have you shared your novel?
(1.56 MB 960x1440 catbox_lvcga8.png)

>>13306 Gotta make it RGB mode now
(1.73 MB 1024x1536 catbox_7d2e2g.png)

today in retard news: loras in the prompt at zero power still affect gen speed pretty dramatically
>>13290 oh, cool. does voldy support xti yet?
takamichi model came out alright. wonder if the 30 comic LO scans I meticulously detexted were of any use. plus there's ~150 pages I had to crop by hand. anyway here is the dataset so nobody else will ever have to do this again. LO cover edits: https://files.catbox.moe/gpwdim.7z crops: https://files.catbox.moe/7jf0sz.7z https://files.catbox.moe/7mdxb3.7z
>>13305 Novel... You mean model? No not yet, this thing is far from done. These latest images I've been sharing have honestly been a recent fluke because of a new model merge I tried. Still got to finish tagging up the main model dataset as that's the magic for why even with a model merge, these images look decent.
>>13312 needs some more marajuanal
>>13312 diona hitting the beer bong
>>13310 this looks fantastic anon, will give it some mileage tomorrow
Priconne (EN) is ending service so someone please make a Kasumi lora to remember the game by
>>13318 give me a dataset and i will cause i was finishing chloe and got fucked in the ass by cr announcement
>>13301 i'm still sad about pikamee BUT GIVE ME THAT MIZUMIZUNI LORA RIGHT NOW
(1.30 MB 960x1440 catbox_vb5cr6.png)

(1.56 MB 960x1440 catbox_8a2vge.png)

(1.50 MB 1440x960 catbox_xivc5l.png)

>>13323 Lucky for you I happened to grab the latest version off civitai before it was taken down https://civitai.com/models/17274/mizumizuni-deepthroat-concepts https://files.catbox.moe/9nugyz.safetensors Unfortunate because of all the existing Mizumizuni loras it's easily the best
>>13325 i'm gonna need the girls und cunny and fizrot ones too, thanks
>>13308 holy fuck, BREED i can't seem to get the same style as everyone else when using b64v3 + alkemanubis @0.4 and ushiyama ame @0.8, no idea why
(1.26 MB 1440x960 catbox_mfcqwa.png)

>>13326 GuP: https://civitai.com/models/19037/girls-und-panzer fiz-rot: https://mega.nz/folder/2FlygZLC#ZsBLBv4Py3zLWHOejvF2EA/file/yBd2lRxS Fiz-rot one I just uploaded the new version of and haven't tested much yet. Though I've trained a few other loras at the same settings and they've all been good. Also might as well post the original mizumizuni lora just in case but it's pretty similar to the newer version they uploaded: https://files.catbox.moe/afophj.safetensors
>>13328 based, thanks time for style-accurate anchovy seggs
where was based64v3 uploaded again?
>try out a gun LoRA >its fucking shit >doesn't disclose keywords >"trigger discipline" broke the placement of the hand on the gun oh for fuck sakes, never trust an artist to know how to work a gun or know its terminology.
(1.73 MB 1024x1536 catbox_ejla2c.png)

my dog keeps sitting by my chair and giving me this stupid fucking look
>>13334 Take her to the doctor, her tongue shouldn't look like that. Probably explains the shitting.
>>13333 man concept LoRAs on civitai are either fucked or they work but it's heavy gacha, not having access to the dataset or the trigger words makes shit even worse
I know this board doesn't really see much of a reason to bake loha, but I'm currently trying to see if I can find a training setup for loha that works as a good base so at the very least, if people make loha they have some decent base setup to go off of. Has anybody managed to train a high quality loha? And if so can I get a json of your setup or an explanation of your methodology? From what I know, loha don't really have much use beyond very large, many-concept datasets but they might be usable to create smaller but accurate style bakes if done right. I still think that low-ish dim locon works fine for style, but I was wondering if, with CP decomposition, we could stuff more data into the same space as a dim16 conv dim8 locon
>>13337 I just used d-adaptation when testing out LoHAs and it worked fine.
>>13324 Catbox pls? where is this style coming from?
>>13339 im pretty sure the LoRA is in the thread fren
>>13325 wonder why it was taken down. esp because civitai doesn't take down much
>>13341 Civitai has been on a spree lately anon
>>13342 all the major public AI sites are starting to censor a lot of things now, hoping that some anons are archiving everything
>>13343 An anon here has been scraping Civitai, there is also the motherload rentry on /g/ and I think gayshit has some because others stole them from /h/ and reupped them to Civitai.
is WD1.4 still the best automatic tagger or has something better come along?
>>13344 I only scrape the metadata so far, I don't have the HDD space for all the models yet, plus I've had two hard drive failures in the past month so I lost a bunch of shit I think we need something like ipfs, didn't some hentai general on 4ch set that up for their DLsite rips?
friend asks if there's a lora for Megumi Kohayakawa (the pop 'n music artist)
>>13327 yeah i seem to have a different version of ushiyama ame than the one used in the fox cunny and gwen pics
>>13340 actually you're only half right, I also used another lora from a few months ago in those https://files.catbox.moe/1siede.safetensors its based on the work of a twitter artist named "focus", adding it with weight 0.4 and the "(surreal)" tag seems to make just about anything crazier https://files.catbox.moe/lj7x12.png https://files.catbox.moe/rox8to.png https://files.catbox.moe/gobl7j.png https://files.catbox.moe/w5lvew.png he does some furry (?) pictures mostly revolving around animal crossing which makes humanoids look goofier. also it seems to add a lot of cool drop shadows. other things I tried:
- deliberately putting "worst quality, bad quality" at the start of the prompt and scheduling it into "masterpiece" with prompt editing, or omitting all quality tags
- spamming magic prompts with the dynamic-prompts extension to see what it comes up with
- using 10 artist wildcards for the heck of it, seems like a lot of people forgot how strongly they can affect the prompt with the lora craze and all
oh and for this one i used a thomas kinkade prompt as a base kek https://files.catbox.moe/lgtbbp.png
>>13346 /hgg2d/, yeah. really wonder why it never caught on more widely.
>>13350 >that url anon...
(9.70 MB 6372x1656 00342-417744577.png)

Hmmm
>>13349 sd's artist knowledge has gotten bleached out of newer models pretty thoroughly
(1.74 MB 1440x960 catbox_lxn72k.png)

(1.68 MB 960x1440 catbox_5ih68g.png)

>>13353 Huh that's looking nice, did you train the new one? I'd be interested to see the difference in dataset and training
Just ran into an issue where I wanted to make an inpaint model of my mix and got an error when loading it. After doing some experiments, it seems that if you try to add difference the inpainting model into a model that was already merged through Add Difference, or that had a model mixed in that was made through Add Difference, SD kicks back a size/shape mismatch error and won't load it. I tested my Ufotable finetune + ExpLine model made through Add Difference merging out the NAI-full model, which failed to load, and I tested an Elden Ring model I mentioned above which had a model made with Add Difference mixed in, and it also failed. I wanted to see if maybe through a block merge, a new method or a roundabout method could be discovered to make inpaint models, and while I have read a couple block merge rentries, the stuff explaining the UNets hasn't exactly clicked with me yet, so maybe someone else can chime in for assistance?
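Not sure about the block merge angle, but one likely source of a size/shape mismatch between an inpainting model and a regular one is the UNet's very first conv layer: the inpainting UNet's conv_in takes 9 input channels (4 latent + 4 masked-image latent + 1 mask) versus 4 on a normal model, so any naive per-tensor merge op hits a (320, 9, 3, 3) vs (320, 4, 3, 3) mismatch on that key. A quick sketch to find offending keys (shapes written out as tuples rather than loading real checkpoints):

```python
def shape_mismatches(shapes_a, shapes_b):
    """Keys present in both checkpoints whose tensor shapes differ."""
    return sorted(k for k in shapes_a.keys() & shapes_b.keys()
                  if shapes_a[k] != shapes_b[k])

# first conv of the SD1.x UNet: 9 input channels on inpainting models, 4 otherwise
normal  = {"model.diffusion_model.input_blocks.0.0.weight": (320, 4, 3, 3)}
inpaint = {"model.diffusion_model.input_blocks.0.0.weight": (320, 9, 3, 3)}
```

With real files you'd fill the dicts from each state dict's `.shape`s; any key this returns is one the merger can't add/subtract directly.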
>>13355 Cute
(1.22 MB 3072x4224 68052.jpg)

M3bros?
>>13310 i was actually in the process of doing this slowly with like a 240 image colletion of comic lo covers. thanks for saving me some work
>find lora that I was putting off searching >check info >trained with AOM2 as it's base I'm starting to get the feeling that I need to train these things myself.
i know nobody cares but NAI introduced a bunch of controlnet-related shit all at once plus upscaling and i genuinely thought it was a fucking april's fools joke thanks to picrel god that'd have been so fucking funny
>>13361 kek Regardless if Aprils Fools or not, I only really care about NAI if shit gets leaked.
>>13362 i just want their new samplers tbh, the outputs are surprisingly really clean but essentially worthless without a good model to use them on
>>13363 If I could I would try to lie my way into a job with them just so I could play with their toys kek
>>13364 "play with their toys" i too would "play with their toys" (leak the new samplers at the very least)
BasedMix anon, what extension do you use for your model mixing? I tried out using SuperMerger and the gen preview stops working after a couple of attempts.
>>13366 yeah I just use supermerger
>>13338 I still have yet to get d-adaption to work for me. Like, I just can't seem to get it to do what I want. I dunno if there is a specific setup I should know about but don't or whatever, but I haven't gotten it to work, in any case...
(1.23 MB 944x1416 04877-1259840649.png)

(1.69 MB 1024x1408 04997-316000202.png)

>>13355 Yes I forgot that there was already 2 on gitgud. I think possummachine-last is better in some cases but mine gets his style very consistently from what I tested. I'll be interested to see people try it out Added to https://mega.nz/folder/6rwS2TCY#k_UZxBhtwERhApT20Cr0NQ/folder/KjwH1D4K
I thought I knew how to select datasets from all the shit I learned in grad school, but apparently not. This shit is actual voodoo magic. I literally just replaced 3 pics of my dataset of 90 and now I'm getting consistently better results with everything else being the same. Btw, these pics are not cherry-picked. I literally took the first images that I got from these particular prompts
>>13359 A lot of the LO covers were reprinted with unedited versions in takamichi's artbooks, which saved a lot of cleanup work, I think it was like ~40 covers that weren't published elsewhere and ~30 that I could salvage with lama-cleaner
(1.97 MB 1024x1536 catbox_u7oobj.png)

(2.08 MB 1024x1536 catbox_nt9b7k.png)

(1.95 MB 1024x1536 catbox_txq0v4.png)

(1.87 MB 1024x1536 catbox_gsfp5o.png)

been a bit since I posted here, didn't really have much time to post. but a friend of mine asked for shunny + drill hair, it was worth genning, despite the dress being pretty fucky, I also opted not to tag the halo as to not have to deal with halo gacha this time
(1.86 MB 1024x1536 catbox_it76zs.png)

(1.96 MB 1024x1536 catbox_0610o0.png)

(1.96 MB 1024x1536 catbox_2flybg.png)

(1.99 MB 1024x1536 catbox_7q31m6.png)

>>13372 shunny sex
>>13373 Was debating if I make a nilou lora myself. I like her but looking at your stuff it seems almost impossible to separate her from her stupid headdress.
>>13375 it's theoretically possible, but there's so few fanart of her without the headdress. The best you can do rn is put (headdress, horns, veil:1.4) in the negative prompt
(2.03 MB 1152x1792 catbox_i7cyfd.png)

I did lots of experimenting these past few days with model mixes and its kind of scary how it only takes just random fuck it moments to produce huge jumps in quality. The autism is finally starting to pay off in the visual department, although the anatomy and hand positions/poses still need to be ironed out, and of course to maintain consistency across as many varieties of prompts, I say as I continue to spam seiba alter.
>>13376 Nvm somethings are better the way they are. Somehow looks worse. Need to find something to get together and make this weekend lora wise, I'm dry on what I want.
>>13378 I want that model or prompts to gen cool action like that when you're done. I've tried making some action scenes and they don't generate well or at all.
>>13380 It's mostly the finetune data carrying the show, but the "action" prompts I'm using are >fighting stance >dynamic pose As cool as the model is, this shit still breaks in comical ways that on other mainstream models this would never do kek. I'm still doing a couple more trainings for additional experiments and cleaning up before it ever can come close to a "beta" release. Screen cap based training is a fucking nightmare because autotaggers are not trained to be used for that kind of source.
>>13381 Makes me curious if you could make some sort of meme "RIP AND TEAR" type of violence lora now, looks like you can just see what it does.
>>13382 for the rip and tearing, the prompts i'm using are >torn dress >broken armor >cracked skin It's doable, here is the Elden Ring AnyV3 based model I merged, using the same prompt as the above set of 4 (granted that prompt is definitely not meant for such an old model). Truly an RTX On/off moment. And that model uses a similar merge of models to AOM1.
>>13383 (me) >Elden Ring AnyV3 based model I merged [IN]*** excuse my retardation
(3.06 MB 1433x2150 catbox_whgbg2.png)

hey guys welcome to orters grab any seat i'll be right with you~
after a day of testing merges I have concluded that I don't like the current look of HLL4's betas. I am hoping the final version will look as good as HLL3 but for now given I have put a lot of tests mixes for based 66 I might as well make a version with hll3 and share it here. There's a few new anime models I want to get working with the mix for a more sharper but still pastel anime look
>>13387 but it's been a few days since I made my last LORA too but I noticed the loss rate of a LORA is a lot more lower when using a min_snr gamma value of 5, I'm wondering if Loss rate actually means something good or bad now with all these developments
>>13387 That's a shame. Well I suppose don't fix what ain't broken and stick with what works.
>>13389 hll3 just has more of a nice basic anime look that makes it really nice for initial mixing recipes, however hll4 is kind of weird in the early betas, it just has a lot of weird shadowing and line drawings. I'm not sure what settings or training images could have given it the "I mixed an anime model with a realistic 1.5 model" look when using it as a starting recipe compared to hll3. It's not really an insult, I'm just surprised it doesn't have the simple anime fanart look that hll3 has, but I'll believe in the final version to fix a lot of the issues mentioned earlier.
>>13390 From what I read on /vt/, he changed his training settings. He is using EveryDream2 to finetune instead of Kohya to take advantage of its multiply balancer mechanic for repeating certain parts of the dataset (think of it like LoRA concept folders but in a txt script), and he is running a double training where he runs the first set at a really high batch size and then resumes at a lower one. He said the high batch size training improved background detail, don't remember if he specifically mentions that the lower size improved subject detail. He mentioned some other details that aren't coming to me but a quick warosu archive search will answer that. To give you as close to an apples to apples comparison as I can, I'm using the settings he used on Kohya for Hll3.1 (which he shared here) with a comparable size dataset on my finetune. Without his help, I would not have been able to make my own model so I am in no way dissing his latest work, I'm just showing that his work... works. First image is from my previous training, second image is my current finetune, third has ExpLineMix merged, and the last is the previous merge but merged with the Any3Mix variant I mentioned above. My current training introduced a lot of outsider data yet it still maintains a good look. So to me, it does not seem to be the images themselves. The only reason he would change his settings is if he read up on something or talked to some people and decided to experiment, and there is nothing wrong with that. The downside is that these models take 36-40 hours to train on a 4090, so you have to be really confident you did your prep work correctly to justify that waiting time and be happy with the end result. Even if you spaced it out with save/resumes, it's time you could've spent doing something else.
One last thing that can also be a reality, based on my own experience, is that if you are only focused on training models and doing basic prompt testing, you don't really have the time to do LoRA compatibility and model mixing testing yourself so stuff like that will fall through the cracks when eventually other people try that stuff out on your model only to see shit doesn't work correctly.
Since the ez scripts anon is here a lot I'm hoping to flag you. Far as I've read and I might've missed it what do the weights on lora merge do. When I merge loras is it to make a 1 out of the total, like .6 of v1 and .4 of v2 or can I just slam both at 1 and is that even better?
>>13392 basically, the values from 0 -> 1 are the percentage that gets added, lora merging is simply adding the weights together. so if you merge 3 loras together at 0.4, 0.6, and 0.8, you get a lora whose weights total 1.8, or roughly 2x the strength of any individual one. that's why you want to try and keep the total merge close to 1, because going over puts too much into the weights and effectively bricks it
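Mirroring that description, the merge is just a ratio-weighted sum of each lora's weights; a toy sketch with scalars standing in for the actual up/down matrices:

```python
def merge_loras(loras, ratios):
    """Sum each lora's weights scaled by its ratio
    (scalars stand in for the real weight tensors)."""
    merged = {}
    for lora, ratio in zip(loras, ratios):
        for key, w in lora.items():
            merged[key] = merged.get(key, 0.0) + ratio * w
    return merged

# the 0.4 + 0.6 + 0.8 example: total effective strength ends up at 1.8
three = [{"w": 1.0}, {"w": 1.0}, {"w": 1.0}]
merged = merge_loras(three, [0.4, 0.6, 0.8])
```

So ratios of .6/.4 summing to 1 keep the merged lora at roughly normal strength, while slamming both at 1 doubles it, same as prompting a single lora at weight 2.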
>>13379 training a holding sword lora failed miserably. Any advice?
>>13394 Based weapon enthusiast. I hope your efforts bear fruit and this will actually be possible.
>>13394 This might sound crazy, but try making new tags named after how the sword or weapon is held, or the type of stance that would lead to a weapon being held in a specific manner. With “holding gun”, if you don’t include “trigger discipline” you have to roll the gacha for all the ways the model was trained on images with “holding gun”, but “trigger discipline” has lots of images holding the gun up front or in an “aim and fire” stance. You may need some more training data, but maybe it can work? You are basically teaching the model an entirely new trick.
>>13396 yeah, my next test is to only gather images with the unsheathing stance and training it on that
>>13397 Yea, working on one stance at the time would also help. Looking forward to your success.
>>13393 I was gonna test it but then the thing broke and it just returns >file instead of a safetensor for some reason. It worked yesterday but broke itself today somehow or from last time I used it.
(3.33 MB 1433x2150 catbox_c9ela0.png)

what do you want, dork, i'm working
(2.13 MB 1024x1536 catbox_56g3jo.png)

this is horrible but i have to post it
>>13401 i keked
>>13399 Uhhh, I'll look into it, I've got work so it'll be a few hours but that sounds like I fucked up somewhere
Has anybody played with min_snr_gamma yet? If so, what have people found? I'm trying to get a bit of a consensus on it so I can more concretely give a solid explanation of its effects in the Readme. I've only baked one thing with it, so I don't really have a ton to go off of on my own, though somebody I've talked to says that it reduces training time.
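For what it's worth, my understanding of the option (going off the Min-SNR-gamma paper that sd-scripts' implementation is based on, for epsilon prediction): the per-timestep loss gets scaled by min(SNR(t), gamma) / SNR(t), so low-noise timesteps with huge SNR get clamped instead of dominating the gradient. Sketch:

```python
def min_snr_weight(snr, gamma=5.0):
    """Min-SNR-gamma loss weight (epsilon-prediction): min(SNR, gamma) / SNR.
    High-SNR (low-noise) timesteps are down-weighted; timesteps with
    SNR <= gamma keep their full weight."""
    return min(snr, gamma) / snr
```

So with the gamma=5 people have been using, a low-noise step at SNR 25 only contributes a fifth of its usual loss, while anything at SNR 5 or below is untouched, which would line up with the lower loss numbers anons are reporting.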
>>13405 not a ton to go off of either because my LORAs always avoid frying but I noticed a lower loss rate while training if that means anything
>>12955 >>12958 gura bros, how do you get the small twintails without it turning into miku. Is my lora too weak?
>>13406 Hmm, that unfortunately really isn't much to go off of, at least I know it actually is working. I am always trying to sanity check my code, especially when it comes to the easy scripts, so even that is a good indication that my code isn't broken at least.
>>13407 Oh, actually... I wasn't using a lora specifically for gura there, that was actually gens using my tenroy locon, because about 70% of his art is gura, it's able to represent her very well. That locon was baked at dim16 conv dim16 Gura should be decently well known to the model already, so you shouldn't need a large dim lora for her. Might be the tags you are using? Those all are catboxed, so you might be able to use those as reference. You could also probably just neg long hair as well
>>13409 Oh also, sometimes it's not the dim size that's the issue, sometimes it's related to how long you trained it for, or just as likely, the dataset
>>13400 Love that angle.
(2.03 MB 1024x1536 based66.png)

Preview of Based66, share some feedback I used the Nilou LORA posted by our genshin anon to see if this model works well with detailed LORAs
(5.25 MB 2048x3072 based66 (2).png)

I've been trying to replicate the alkemanubis + ushiyama ame pics but I don't think I have the right lora for the latter. The one I have is named "ushiyama-ame-1272it-novae", is this the right one?
So did the torch 2.0 thing die? I haven't seen much talk about it and wanted to finally try it out.
>>13416 no it didn't die out, it's so simple to install that we're pretty much used to the 5-10% boost now
>>13417 ok, I haven't done any changes to my main folder after doing the cudnn and the first cu118 so I've been hesitant to fuck with it until now, especially with that major update that voldy pushed that broke everything.
>>13414 >>13412 Best thing I think is to toss it into the wild and get some actual feedback besides small gens. Definitive question I think is did you do the same style with sex models like 64, since I like 65 a lot but it does porn and some other stuff way worse than 64. I still like 64 quite a bit as my favorite vanilla model that works with everything.
>>13418 >>13416 I tried it and it bricked me on making hi res gens in img2img with sd ultimate so I reverted back.
>>13419 I made sure that notable sex models were added into the early mix however there wasn't as many compared to 64 while 65 had more less focused sex models and its main focus was the pastel mix. Yeah I'll release this into the wild, just let me make a new LORA and test it out with the 3 Based mixes
Not sure if this would be more related to the pixiv crowd of anons, but has anyone ever thought to make bootleg versions of models that can’t be used commercially because they mixed some licensed model in their merge and thought of just replacing that model or two with a free use version and get a similar looking result? I know reverse engineering a model is difficult without the exact recipe but surely the model merger gods would have some idea to get a close enough result? An example of a model would be chillout mix which used a merge model containing Dreamlike, and the intent would be recreate that merge without dreamlike and then follow the rest of merge to bypass that restriction as well as Civitai’s ownership.
>>13422 I used deleted models in 66 but will never mention them because all the other models I put in took away their unique appearance (or made them better kek) But aside from that yes it is possible to get "bootleg" versions of those autistic models, if you're lucky the person or someone exposed the mixing recipe, all you would have to do is change the values of the MBW or one of the models within the recipe to make it your own
>>13423 Should have just saved the recipe into the model like how sd-scripts does with LoRA metadata, would have changed history with how important its becoming
>>13423 >took away the uniqueness or made them better Teach me your ways kek In regards to the chillout example, a detailed mix was never disclosed other than it was basil + 2 variants of the same realism sex model, and while the latter did include their merge list as well, the recipe was not included. I hunted down the models and tried to experiment but my shit looks like scrambled egg noodle face memes.
>>13424 yeah and we can have fewer cases of models needing to be archived because even if their model maker was a schizo that took their shit down we can at least rebake it ourselves. However finetuned models that get taken down are lost to time because even if a double 3090 chad or 4090 chad rebakes it with the "same" dataset, just like with LORA baking, using the same settings and dataset will always result in something different
>>13425 you just really have to do trial and error between no usage of MBW or with different preset values for MBW values. With the based mixes I used supermerger and changed around the model placements in the recipe to see which order works best
>model licenses and copyright why are these even a thing? what the fuck? just the usual "so some kike lawyer can make some money from the future lawsuits" thing?
>Testing loras at different mix weights >Doing batch 2 or 3 for speed >1 of them will fry and the other turns out fine I don't understand this shit sometimes, like I can't tell if this really works or not since I'm getting a 50/50 on it completely frying or looking better.
>>13428 Something, something meme corpo future you don't actually own anything etc. But yeah DO NOT break the TOS no his slightly edit'd stolen model
>>13430 i find this so funny because in 99% of the cases you can't even go "well actually i own the training material" unless you either own a stock image company or you've bought a license to use stock shit
>>13428 It's really fucking stupid but if you want to go the route of making a few bucks, you really got to cover your ass to some extent or go lawful evil and do it for the sake of giving someone the legal middle finger.
>>13432 >It's really fucking stupid but if you want to go the route of making a few bucks, you really got to cover your ass to some extent or go lawful evil and do it for the sake of giving someone the legal middle finger. wym? what, are patreon and pixiv going to force me to give them my model and settings at gunpoint just to see if i own some loicense? lol
Based67 or 68 will be made once I finetune my own model or the hll4 final version looks really good. Finetuning is going to be something I'm dipping my toes into for the first time, and given it's only a handful of anons doing it, it's going to take a lot of trial and error.
>>13434 stop blueballing us and call it based69 already
>>13431 >stock image company Even these guys aren't safe. Artists who have contributed to Getty Images have said they would sue Getty on the grounds that they were underpaid for their work, while Getty itself is trying to sue StabilityAI for $150,000 per image x 12 million images.
>>13435 Based69 will come, let it build up to that point kek
>>13436 >in the end all the outrage about AI just comes down to copyright kikery because the kikes literally cannot let anything exist if they can't exploit it for money in the long run sigh
>>13433 Nah, just that some disgruntled niggers in the future could try to contact them to force a take down on you or some shit. With the exception Civitai taking down 3 Loras per a DMCA claim (lmao), no real precedent has really been set.
>>13438 95% of the modern problems is kikes realizing they can "legally" steal money from someone over something.
>>13440 i mean, yeah
>>13427 Have you ever tried out that autoMBW? The one that made those Silicon models by randomizing model merges and running them through a clip aesthetic checker?
>>13415 pretty sure yeah. are you using dynamic thresholding?
>>13422 if you merged in 1% of some locon you trained, how would anyone know? The hash is different and you can just say you merged your own shit
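fwiw the merge itself is just a lerp over the state dicts, something like this (toy sketch with plain numbers standing in for tensors, not the actual webui merge code):

```python
def weighted_merge(sd_a, sd_b, alpha=0.01):
    """(1 - alpha) * A + alpha * B over the keys both models share.
    Works the same on plain floats here as on torch tensors in practice."""
    return {k: (1.0 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a if k in sd_b}
```

at alpha=0.01 the weights barely move but every hash changes, which is the point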
>>13434 Here are hll anon's finetune script settings for Kohya https://rentry.org/9fqnh Settings are for a 4090 though, on a 100k+ image set, so you may need to tweak them to match your card if it's something else. See if you can catch hll anon on /vt/ and ask him for pointers; I'll try to help as well with what I know.
>>13428 I'm a freetard so I don't want corpos using the free shit I made to make money if they don't contribute back by open-sourcing their shit. Anyone else can do whatever they want tho. This is why I like viral licenses like GPL.
(924.08 KB 498x277 AQUASCREM.gif)

>>13443 am i using what now?
>>13446 ah yes, a fellow FOSS brother
>>13442 >clip aesthetic checker not sure what that is, but yes, it was a mix: supermerger first for the initial mixes to see which order worked best (because you can merge and gen a test result without saving the model), then different autoMBW tests afterwards. That method makes me wish autoMBW presets were added to supermerger though >>13445 thanks anon, I'll be testing it out with one of my 3090s, but for now I have to scrape my dataset for the finetune with Hydrus. I wish I could easily understand the method for anime screencap finetuning, because it would be nice to make "dreambooth models" like the umamusume one and then size them down to a locon
(1.67 MB 1024x1536 catbox_hlstfd.png)

Who let this young boy(?) into the construction site?
>>13446 even enforcement of the GPL depends on having the legal resources to launch an injunction against the infringing corpo. is a lone anon going to bother running through all those steps just for a paltry sum? is it worth having the anon's info reprinted in "historic first court case enforcing SD model merge licensing" headlines that are gonna get spread around?
>>13451 the funniest example i can think of is KORG violating the gpl license, spergs kicking up a shitstorm and it ending in... absolutely nothing lol
>>13452 on one end: lol get fucked on the other end: they could've gotten korg to cook up an actual fucking OS instead of the garbage in-house distro they've been using since the OASIS
>>13446 based and FOSS pilled
>>13449 Be warned, mucho autismo texto.
>I wish I could easily understand the method for anime screencap finetuning
It's fucking hell, pic related is a glimpse. First off, autotagging by itself introduces lots of false positive tags in the "xBoy/xGirl" department, even in images with no subject at all. I have a somewhat reliable method to at least make sure the "1girl/1boy" count is mostly correct, so I only need to go back and verify the subjects are the right gender. Multiple subjects I've resorted to doing manually, case by case (character too out of frame or too far in the background? Maybe I'm not gonna tag it). Point being, you cannot let the autotagger do all the work; if you're using screencaps you have to manually clean up thousands of images.
Regular fan art, key art, poster/artbook/magazine scans, and the occasional official art piece from social media or website rips are not an issue, because the tagging models are trained on images with that general look. That's why hll anon can get away with not going back and fixing huge bulks of tags unless something is flat out wrong: his set is 95% fanart (the other 5% being official corpo assets or sanctioned artist work for merch and promos), and even then, more images eventually outweigh the small tagging mistakes.
As for my own training, it is not optimized at all. While my sources are primarily 1920x1080 images, the training is done at 512x512 scale. Kohya's translated readme doesn't say much about how you should adjust for larger images; I'm not sure if I need to change anything, hell, I'm not sure if I should be downsizing the source images. In case I do need to, I have bucketed crops (made with the method from the anime pipeline) for about 70% of the images waiting in the wings that just need minor tagging done.
I just wish there was an easy way to organize parent/child relationships of images in Hydrus, or at least I haven't found one. I made a workaround for the future datasets I'm planning to use, but my ufotable set is completely unorganized on that front. Also, unless I'm mistaken, until recently the finetune trainer did not include built-in bucketing like the LoRA trainer does; you used to have to run the latents separately to get a bucketing json. I tried doing that on a recent training, the thing ended in a shit show and I lost 11 hours of training. When I reran with the latest pull it gave me a bucket info window, even though I still had the latents json sitting in the training folder and hadn't enabled the extra settings in the script to actually make it take effect. I'll have to check this the next time I train. Um... not sure what else to disclose. Ask away I guess?
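For reference, the separate latent/bucket prep step I mean is roughly this (argument names from memory, so double-check kohya's finetune README before running; paths are placeholders):

```shell
# precompute bucketed latents + write the metadata json the finetune trainer reads
python finetune/prepare_buckets_latents.py \
  train_images meta_clean.json meta_lat.json base_model.ckpt \
  --batch_size 4 --max_resolution 512,512 --mixed_precision no
```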
>>13447 https://github.com/mcmonkeyprojects/sd-dynamic-thresholding set cfg to 15, mimic cfg 5, top percentile clamp ~92%, half cosine down and 3 for both sliders
>>13456 was about to ask if this was for gwen or the foxes but i just looked at the png info kek did you also make the 4 foxes?
>>13426 Alright PR saving merge recipes submitted, pray to heaven that auto merges it in the next few weeks https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/9312
>>13458 fucking based, think you can do one for supermerger and MBW GUI?
>>13457 nope
>>13458 nice, hopefully the schizo chinks that own one of the models I used in Based66 don't come after me, because I do plan on uploading it along with the other based mixes on civitai. Not that it really matters; this is helpful for the cases where schizo/moral-guilted model mixers take down their shit, since we can just recreate their models if they delete them
>>13449 Unless I'm retarded, can't you just move the presets files from autoMBW to SuperMerger? >>13461 Pretty much, and we just get around it by ckpt merging what we don't want them to see then resave as safetensors
(1.79 MB 1536x1024 catbox_mlvenx.png)

>>13456 Interesting
>>13462 huh, I wonder if you can. There are python script files that should have the configs, but I don't recall there being an option to save or load a preset in supermerger... or maybe I'm dumb, because I just manually copy and paste each value of a preset into supermerger kek
>>13464 probably just copypaste the python details.
>>13463 it's a pretty straightforward quality enhancer another good one: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/8457 a little faster than euler a, a little more detailed than regular dpm++2m. i haven't used any sampler besides it and euler a since i found it.
>>13460 just gwen then? where'd you get the ame lora? i think yours might be some older version
>>13466 just need to follow the instructions on the first post right?
>>13467 i'm like 90% sure the one i have is the one that's still in the gitgud >>13468 yep. make sure you match the existing indentation when you're inserting the new lines into samplers_k_diffusion = [
can someone please explain whats the point of using higher CFG?
>>13469 i got the one i mentioned from a mega linked a few dozen posts above, will try the gitgud. i keep forgetting that it exists
easyanon is it possible if you can make a bat/script file that works with kohya's implementation of the new version of textual inversion? >train_textual_inversion_XTI.py
>>13459 Can do that for supermerger, but the author of MBW hasn't committed since January; sadly I dunno if I can get the change into his repo at this point
>>13461 The authors can choose to disable .safetensors metadata to hide their merge recipe with the current implementation, should that be an option?
>>13472 I probably can, but it'll be closer to the scripts of old, where everything is in one file
>>13469 >>13471 nvm i'm retarded, it's the same mega
>>13476 that works, I just want to start testing it out in a more concise form, like how easyscripts' jsons or args files work for LoRA baking. My other question: running update.bat will pull every one of Kohya's sd-scripts updates since March 31st, right?
Try as I might to fight it, I always end up coming back to Poke girls.
>>13474 I don't know if Voldy would allow scumbags like us to force people to keep metadata on. I would love to see them squirm and be unable to hide it.
>>13479 I've been in a rut on what I want to make next and think I'm gonna do some poke girls.
>>13479 oh my god yellow exists
>>13481 R/S/E may please and thank you
>>13478 I should be able to get the json stuff working for that script as well, just need to figure out all of the args and make sure to process it properly
>>13482 And don't you forget her.
>>13479 same tbh. >>13482 yes one of those obscure Gen 1 pokegirls who will probably never make another appearance again.
>>13485 i've been wanting to read adventures since forever but at the same time not really? it's too shonen-y for my taste and i'd rather keep my childhood memories of pokemon intact tbh
>>13400 Any insight on how did you get that much detail specially the eyes?
>>13487 R/S was kino to me when I was younger. Loved the uptight boy / wild child girl dynamic.
>>13483 I'm for sure doing Lorelei, quality is debatable. Can't think of much else off the top of my head, gonna browse the booru for poke girls. Hilda would be nice, but you can prompt her purely from inputs with most models.
>>13488 the only special tricks were >>13456 >>13466 everything else is in the catbox. just a good model and a couple of loras that work well together, didn't even inpaint.
>>13489 oh yeah i know about jungle tomboy sapphire, 10/10 but yeah from what i've seen and heard adventures gets way too dark for my taste. or at least uncharacteristically dark for the watered-down/neutered western pokemon i grew up with. i know JP 'mon is far darker with basic things like fainting being closer to "near-death" but it still feels weird lol
>>13490 hilda's hair turned into weird blobs when i prompted her on NAI way back when, gotta try her again also do THE TWINS (so i can try to pair them with lana)
>>13029 gigabased dataset hoarder
>>13494 >it's the seraziel anon no YOU'RE the gigabased one
>>13494 >>13495 you're all gigabased
>>13494 >>13495 did another version of the seraziel (+implicit holo) come out btw? need to catch up
>>13494 >>13029 Same here. I got fucked by cropping to 512x512 too many times during the TI training days that I always keep a copy of the original uncropped data now.
(1.44 MB 960x1152 recreated.png)

(1.43 MB 960x1152 original.png)

yeah yeah i know i shouldn't try to match other people's outputs but bad habits die hard and i'm one habitudinal motherfucker left is mine, right is the original there are way too many differences to be xformers' fault so what's wrong here? just hardware differences? b64v3 vs b64v3-pruned? all i did was import the image in pnginfo, send to txt2image and most importantly i disabled the "override settings" thing because jesus fucking christ i was getting near third alphabet letter sta*er results with it on for some reason
>>13499 yeah pruning and xformers can make that much of a difference
>>13495 >>13496 Thanks. Also, calling for the Nahida-anon. A friend of mine is doing a shokuhou misaki lora (his first lora) and I just told him to throw as many steps as possible (as it always worked for me), but it cooks the lora before the eyes are done. I know you had the same problem with nahida so bottle to the sea. First one is dadaptation, second is adamw8bit, both at dim 64. 1300 steps per epoch for 3 epochs, with a few good images with repeats. https://files.catbox.moe/k1y7qh.png >>13497 I did a v4 like 3 weeks ago. Haven't been able to train much since but I have some ideas to get back into it and make that one better. It's much better at the style but cocks are a bit hard, though getting cocks right from a single artist is maybe not that feasible
>>13501 based, link it whenever pretty please, i lost the link in my sea of mega bookmarks >>13500 way too different to be xformers, i generated it a few times and the only noticeable change is a slightly thinner thigh. is there a link to pruned b64v3 or do i have to prune it myself?
>>13502 It got posted on gayshit, but here's the link to the mega : https://mega.nz/folder/53gW3bbK#NxXyJ0l9T3JyAXolJfpBDw
>>13502 i pruned mine with stable-diffusion-webui-model-toolkit and i see the same model hash on other people's catboxes using models labeled b64v3 pruned, i assume that means something
>>13501 you can download the nahida lora and view the training info in the webui. I don't think I actually had any problems with the lora overcooking. I used cosine LR scheduler, which is supposed to prevent frying. I also use 1 repeat since that also helps with preventing frying. One limitation I had when training Nahida is that I used 1 batch size due to limited VRAM, but I've gone back to using 2 batch size but with gradient checkpointing enabled (This is actually faster despite gradient checkpointing being slower since you train 2 images at a time instead of 1 at a time). As for dims, I recommend going with 128 and downsizing it later with the resize lora script. I usually keep alpha the same as dims. I think the real secret in how I got the eyes working is how I tagged the images. I just looked at all the images in the largest sized thumbnail in windows and if I can immediately recognize the pupils I tagged it, but if I can't discern the pupil shape in the thumbnail I didn't tag it.
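For reference, that setup translates to roughly these sd-scripts flags (paths/values are placeholders and flag names are from memory, so check them against your own args file before copying):

```shell
# LoRA bake: batch 2 + gradient checkpointing, cosine schedule, dim/alpha 128
accelerate launch train_network.py \
  --pretrained_model_name_or_path base_model.safetensors \
  --train_data_dir dataset --output_dir out \
  --network_module networks.lora \
  --network_dim 128 --network_alpha 128 \
  --train_batch_size 2 --gradient_checkpointing \
  --lr_scheduler cosine

# then shrink the finished bake down afterwards
python networks/resize_lora.py \
  --model out/last.safetensors \
  --save_to out/last_dim32.safetensors \
  --new_rank 32 --device cuda
```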
>>13505 Which one is yours? I see a few on gayshit
>>13504 >>13502 >>13500 >>13499 made a pruned one with the same hash and uh nope, it's not that. also disabled xformers and guess what? it's not that either i'm guessing hardware differences at this point
>>13506 here https://civitai.com/models/16923/yet-another-nahidagenshin-impact-lora I looked at the new Nahida loras, but what bothers me is the chink one with around 500 downloads. Why the fuck would you provide sample images but scrub the prompt metadata? The whole point of sample images is to provide example generations
>>13508 Thanks, will look at it since the settings I gave him probably only work with my blessing of not being able to fry loras. Yeah stripping metadata is weird, but then again civitai lmao
Now that's a strong dick. That's like what, 35kg?
>>13508 Answered yourself with >chink They're the most secretive little bugmen fucks on the planet with their shit. Having to bother with the ring-around with chink koikatsu cartels and their 4 secret hand signs to get entrance to their clubs is the stupidest shit.
>>13480 Better to give the option because otherwise naive users would definitely become susceptible to it. Semi-related, sharing this here. Someone made a metadata injector that stores data in the alpha channel. It works basically everywhere (Discord, Twitter, 4chan, etc) because no site accounts for this currently. I wouldn't doubt if this gets shut down by those sites pretty soon though. https://prompt-protector.aibooru.online/ https://github.com/ashen-sensored/sd_webui_stealth_pnginfo
>>13512 jesus christ anon you can't just drop a masterpiece like that out of nowhere
...and submitted a supermerger PR for saving recipe. https://github.com/hako-mikan/sd-webui-supermerger/pull/44 >>13499 GPU model is known to change the final output, I don't remember where I saw it but there was an XY plot with different cards tested that showed the differences
>>13514 I didn't make a plot, but I had made a mention of this back in January when I upgraded from a 3080 to a 4090 that I got slight changes outputs when testing old prompts for calibration purposes.
>>13514 yeah i remember it too, have it somewhere. iirc the 10xx series was the most consistent with itself and the most inconsistent one is the 30xx
>>13491 well this is actually really amazing i was already using dynamic thresholding so i guess this could also be a difference in upscaler.
>Do git pull to latest build >suddenly getting slightly faster and better looking generations >didn't even do xformers 2.0 yet the fuck is this black magic?
>>13518 >better looking generations This is 100% placebo. Go back and regen some old images. If they turn out different something is definitely fucked.
(2.00 MB 1024x1536 input.png)

(2.00 MB 1024x1536 output.png)

(142.47 KB 670x976 ImageGlass_rB9L01QVpY.png)

>>13512 I was actually thinking about a possible solution to that recently myself, but making use of the full rgb channels instead of the alpha. I'm struggling to find motivation to continue with it because I really don't like webui spaghetti code. but I can probably find enough motivation to release a v1 then shelve it. example below. first image is without the encoding, second is with, third image is showing how many pixels are actually being used (top of the image).
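if anyone wants the general idea, here's a toy LSB version (length-prefixed payload in the alpha channel, operating on a raw RGBA pixel list like what PIL's getdata() gives you; this is NOT the extension's actual encoding, just the concept):

```python
def embed_text(pixels, text):
    """Hide length-prefixed UTF-8 text in the alpha LSBs of an RGBA pixel list."""
    data = text.encode("utf-8")
    payload = len(data).to_bytes(4, "big") + data
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = list(pixels)
    for idx, bit in enumerate(bits):
        r, g, b, a = out[idx]
        out[idx] = (r, g, b, (a & 0xFE) | bit)  # touch only the alpha LSB
    return out

def extract_text(pixels):
    """Read the payload back out of the alpha LSBs."""
    bits = [p[3] & 1 for p in pixels]

    def read(start, n):  # start and n are in bytes
        return bytes(
            sum(bits[(start + i) * 8 + j] << (7 - j) for j in range(8))
            for i in range(n)
        )

    length = int.from_bytes(read(0, 4), "big")
    return read(4, length).decode("utf-8")
```

only survives lossless formats like PNG; any lossy recompression nukes it, same weakness the real thing has to dodge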
imagine what this shit would be like if we had a proper (ideally standalone) UI for this and not this gradio shit or if SD didn't rely on p*thon
>>13519 Yea I think you are right, old gens look the same.
>different GPU series have different outputs THE SUFFERING NEVER ENDS (it must be because of the chips each version uses changing things tbh)
>>13520 You can always write it outside of the webui. If it seems useful then somebody more familiar with the webui code like myself could implement it.
>>13521 I think we'll only get something like that/equivalent to llama.cpp/whisper.cpp when we get public models with more hyperparameters. The hardware limitation for the average consumer isn't there yet like it is for LLMs.
Idk if futa is okay here since the h/d/g thread here is basically dead. Still trying to get a good mix of futa loras that don't always make the generic veiny futa cock >>13521 I hate gradio so much it's unreal
>>13521 gradio ought to just hire me so i can fix their shittastic ui decisions, its like none of them use webui with a fully loaded extension kit so they dont understand how goddam hard it is for porn-addicted anons to use my language of choice for burning down the p*thon moat would be golang or elixir
Been a while since I posted a gen. Criminal lack of angry maids, forgive my lack of inpainting in some areas. How you been anons, what's new?
>>13528 >my language of choice for burning down the p*thon moat would be golang or elixir whatever would give a speed boost tbh
>>13526 ideally post there yeah but i don't really mind
>thread is approaching 1000 posts >pc starts freezing on every refresh/post my ram gets here today but it's 4 am and if i go sleep now i'm gonna miss the delivery but that also means staying awake for like nearly 30 hours well shit
>>13524 well, then it's basically done, I do want to play around with possibly getting a buffer in there instead of using an end token, might speed things up a small bit, but to be fair, it's already nearly instant for the normal prompt length. I think I do need to include a few more things before I can call it good though, namely a way to set the prompt according to the prompt in the meta, and a way to modify a bunch of files at once. I guess i'll at the very least try to make a webui version though, might base it off of https://github.com/ashen-sensored/sd_webui_stealth_pnginfo
>>13531 for me its not speed, its dev usability. python is a shit because you can't reindent the entire file easily and you can't tell what block you're in without scrolling up or down, not to mention single-line-only lambdas just because that's what fucking guido wanted, the lack of good functional programming syntax, the idiotic packaging/import system... but to redo it all in a sane language you'd have to match the productivity of dozens of salaried data scientists with PhDs in machine learning churning out new features and bugfixes daily, all of whom refuse to use anything other than python. it's fucking hopeless
>>13527 funnily enough style loras are the easiest of them all to train. I don't have to curate the dataset, do manual tagging or anything, just download all the images of an artist with grabber, auto crop, auto tag, then run it through the trainer with my usual settings. It's so much easier compared to training good character loras, where I have to manually select the dataset, manually tag shit, and reiterate on improving the dataset and retraining multiple times. Btw, you have any artist lora ideas?
>>13530 (me) mother fucker, I somehow accidentally clicked [Restore faces], Jesus Christ. Deleting my embarrassing post.
(163.82 KB 753x1280 pawoo_2939116_254277.jpg)

>>13537 not him, but soichiru yamamoto / udon0531 on twitter / udon122 on pawoo seems like an inexplicable oversight
>>13539 didn't mean to spoil that, just a cute ranma-chan
>>13541 >with my usual settings could you share them? if all goes well i might have a working 3090 rig by the end of the week and i want to train some styles myself >Btw, you have any artist lora ideas? i was cleaning up and cropping a dataset for s16xue a few days ago but i stopped cause i got busy could try spring2013 https://www.pixiv.net/en/users/53828110/illustrations or wokada but he deleted most of his pixiv stuff https://www.pixiv.net/en/users/552918/illustrations
>>13538 Don't feel too bad. I was testing prompts for a Lora on Civit, one of the example images that I copy pasted settings from had [Restore faces] turned on. Was genning for a solid 30 mins wondering why the simple lora fucked every face up
>>13539 >twitter >pawoo I don't think I have a scraper for that so I'm probably not going to train these, but I'll put soichiru yamamoto in my queue >>13541 I'll give spring2013 a shot. Settings are here. Ignore the resolution settings, I started using higher resolutions and bucketing https://pastebin.com/BCgqGT2c
>>13543 top fucking kek
>>13541 wokada should be on danbooru still, not many bad_id uploads there https://danbooru.donmai.us/posts?tags=wokada+&z=5
>>13544 i use gallery-dl but i think you need to provide a login for both sites these days
>>13547 you can use the --cookies-from-browser option if you're logged into the sites in a browser already to not have to reenter the credentials
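e.g. something like this (swap in whatever browser you're actually logged in with):

```shell
# reuse an existing browser login session instead of passing credentials
gallery-dl --cookies-from-browser firefox \
  "https://www.pixiv.net/en/users/53828110/illustrations"
```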
>>13548 oh, neat
>>13469 does this thing also affect other samplers too or just the DPM++ ones?
>>13551 doesn't alter any of the existing samplers
(1.93 MB 1024x1536 00005-1328581108.png)

(1.98 MB 1024x1536 00008-1890136655.png)

(1.85 MB 1024x1536 00061-1188184994.png)

(2.05 MB 1024x1536 00003-4001447748.png)

(1.82 MB 1024x1536 00009-1008052366.png)

(1.80 MB 1024x1536 00014-3026873708.png)

(1.76 MB 1024x1536 00149-1874607645.png)

(1.76 MB 1024x1536 00003-366441165.png)

>>13553 dear GOD THAT ASS
(2.51 MB 1024x1408 38671.png)

cool
>>13521 >>13526 >>13528 Look at the bright side of this, the larger community using Midjourney is stuck using a fucking Discord bot.
>>13521 Torch has a cpp interface, if you really knuckled down you could probably port most of the ML shit without much trouble. Then it'd just be a matter of writing a UI with like Qt or something. Of course by the time you finished there'd be like a dozen new groundbreaking features developed in the mean time that you'd have to port over too for anyone to actually want to use it.
>>13558 C++ is kinda a nightmare though, I'd at least want a language that I can hotload changes in and poke around the state with. Some lisp variant maybe. Or any language with a networked REPL really. I hate having to restart webui every 5 minutes to test the smallest possible change, it's practically what dynamic languages are made for and yet Python's totally fucked import system makes it impractical.
>>13558 Also how would extensions work with pure C++? I guess you would load some kind of dynamic library compiled against each platform?
>>13559 I vastly prefer compiled strongly typed languages because you tend to catch most mistakes at compile time instead of having to wait til runtime. But I agree, pretty much anything would be better than python. >>13560 Yeah, dlls/sos probably. There's really only two platforms right?
This fucking ass shot took forever, but my dick is satisfied.
>>13561 It's a tradeoff I guess, I don't like compile cycles and inferred typing is good enough for something like webui where correctness isn't as important as say a game engine. Plus unittests can catch some of the typing-related errors. More like three platforms if you count both macOS and Linux. And there are more architecture variations now that Apple Silicon is a thing. Then there's the issue of C++ portability and setting up the entire toolchain, which for all of Python's packaging issues at least it's fairly simple to just install the interpreter. And just wait until you need to reach for Boost or something... Anecdotally I tried installing a distro of pure data, which uses C extensions, and it was a major pain in the ass to get anything working because there were API incompatibilities and binary-only plugins that were years out of date.
>>13563 >people actually use macOS I refuse to accept that this is true.
>>13565 NTA but unfortunately that is true. Someone in some higher position that is tech illiterate always tries to force IT to shove Macs down people’s throats, no matter the industry.
>>13568 >seems like xformers supports torch 2 now for anyone who's not using it >2 days earlier not sure what this means because I built my xformers with 2.0 cu118
>>13569 it means he's too retarded to google a handhold guide to manually build xformers
>>13570 makes sense, given that most of the automatic xformers installs give you a build against the same torch/torchvision versions the webui or LoRA scripts originally used. Still don't understand why Kohya or Voldy didn't handle the torch upgrades themselves, while Voldy went full retard and did a gradio update that literally broke everything (still not sure if that's been fixed; I never pulled after people posted about extensions and other things no longer working)
(1.91 MB 1024x1536 00155-409842644.png)

(1.88 MB 1024x1536 00157-409842646.png)

(1.92 MB 1024x1536 00139-2431285649.png)

(1.86 MB 1024x1536 00150-1285515663.png)

i love me!me!me!
>>13571 I am on the latest commit and Gradio hasn't given me problems. But I did do a fresh git clone because I was expecting it to break.
BasedMix anon, not sure if you noticed but Hll4.3 beta was released if you wanna test it https://huggingface.co/CluelessC/hll-test/tree/main
>>13566 >>13573 >fresh git clone I should do that sometime. I just put git pull in the startup script since day 1 and never had an issue that required a fresh clone
>>13574 did hll anon make a Locon out of Beta 3? Huh interesting, I'll test it out because mix 1 of Based66 is done I'm just testing it out with a recent LORA I baked to see if it works well with details
>>13573 that makes sense, I'll do it myself to see if a fresh clone will make things work smoothly.
>>13576 https://boards.4channel.org/vt/thread/46457300#p46479113 I'm just gonna post his original post. He also provides some more training details helpful for finetuning.
>>13578 so they used Lion this time... yeah I had a feeling that optimizer would be good for finetuning, given it learns harder and roughly halves the training steps Adam would need. Not sure why the orange mixes still have that strange realistic look while the example grid posted looks very anime; that was the one thing that turned me off when doing Based66 mix 1
>>13579 https://litter.catbox.moe/v0qmzr.jpg https://litter.catbox.moe/7cl7yu.jpg but it seems to look better with the other mixes, it just had a look that I don't really like with the AOM mixes. Maybe I'll test out P3 and make sure that AOM3 is placed later in the mix; I'll just have to try and see if I can get that same sharp anime line look I got with Based66 mix 1
>>13578 >trained on AI images on the previous revision Whoever fucking told him to do that should be fucking shot
>>13581 Aside from that, didn't someone mention that Kohya's implementation of reg_imgs is broken? I just brought it up because I recall people putting AI generated images into that folder when training
Is Based66 uploaded somewhere yet or is anon still working on it?
>>13580 I still don't understand how to properly place models on the mixing chain. I wanna test my finetune with more merges but it's time I'm not putting into my dataset clean up. >>13582 I remember hearing that as well, but that reg images wasn't necessary. From the little hll anon has mentioned here and what I could find from his old posts on /vt/, doesn't seem like he ever used them. EveryDream2's does work however so maybe an attempt was made to use reg images?
>>13583 still working on it, I'm going to test out hll4p3 in the recipe now.
>fresh build of webui with new gradio >do pytorch upgrade and pip install my built xformers >XFORMERS DOES NOT WORK ON THIS BUILD IF YOU'RE A FACEBOOK EMPLOYEE yeah just stick with torch 2.0.0
does anyone use the loss graph to narrow down which which epoch of the lora you want? I just pick the lora version at the dips in the graph and if the sample image is good then I use it
>>13587 >the loss graph this is new to me, I thought loss rate didn't really mean much with sd-scripts baking
>>13587 I'm not sure loss is supposed to quantify "aesthetically pleasing to humans". Maybe using one of those aesthetic classifiers alongside the autogenerated images would be better?
>>13590 Sometimes I find reducing repeats so that there's 800-1200 total seen images combined with 5e-5 - 6e-5 LR helps reduce burning and improve quality. And I always train for 10 epochs across 3-5 different configs selecting the best out of all configs/epochs. But that is my personal experience.
>>13586 Were you trying 2.1.0?
>>13592 yeah the latest build from today, it worked last time and I could generate stuff but this new git clone with the gradio update gave me that weird ass error
>>13587 Is this a GUI thing? I still use the powershell script like a boomer
>>13594 it's an option in sd_scripts, even your powershell script can do it, just need to set a logging folder
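i.e. roughly this (flag names from memory, verify against the sd-scripts readme):

```shell
# add to the training args:
#   --logging_dir logs --log_prefix style_run_
# then view the loss graph live with:
tensorboard --logdir logs
```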
>>13590 should've tried with dynamic cfg since it seems the newer one does better with a slightly lower cfg, but I already tried doubling again for the lulz >>13591 These two were done at 2e-4 and 4e-4 unet LR (half that for Tenc). Never tried going as low as you, but I am around 1k total images counting repeats on the holo folder, and I was doing 12 epochs at first, and this one is a try with only 6. Maybe bumping weight decay to 0.1 helped here? idk I'm just going on gut feelings but I'm bad at comparing loras sometimes, like here
>>13596 >doubling again >runs slower huh
>>13596 The momo one I did was at 5.5e-5 LR and works with style loras pretty well. I increased the training images to 3500 total and was considering lowering it even more, but after resizing it to dim32 it seemed to work alright with a lot of other loras mixed in. Would probably not do so many repeats next time just to be sure.
>>13595 huh, I do have logs already set, guess I'll give it a check for next time.
>>13069 Always fascinating to compare my old loras to new ones Selfmade was from february and posted some threads ago
>>13600 I didn't even realize that somebody else had made a tenroy lora before me, I just thought it'd be fun to try and make a style lora on such a small dataset. seems like both of our attempts were pretty close to each other! though... mine is seemingly more horny, seeing as mine had two images of nipples visible compared to yours. pretty cool! I'm actually really happy with it even though it still isn't exactly tenroy's style (which is so inconsistent that you probably wouldn't be able to nail it regardless of dim size) what do you think of my version? from these previews your version looks pretty good too!
(2.05 MB 1536x1536 catbox_vpq79u.png)

>>13601 I posted it a while ago >>6461 while I was experimenting what is really needed to make a good style lora Fun fact is that at DIM8 without text lr you get smol 6mb loras which cannot compare to bigger ones but were useful for research Your try seems not bad and I always love to see new guys making lora Even better if I can compare them to my tries
>>13602 I actually had a smaller sized bake, but I was trying to get more consistency. to be fair though, I'm not really new, I've been here from the beginning, I think I just missed your version. I released the dim16 conv dim16 version, and I'm thinking I want to dynamic resize it down and see if I can cut its file size further. I was able to get a pretty high quality dim4 conv dim8 tab_head locon made, which is 13mb and really close to the original style, but I went higher than usual for tenroy because I was chasing consistency.
no luck with my new xformers, getting the no module error even if I do a force-enable-xformers command line however I noticed in the launch.py file >xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.16rc425') wonder if I have to get rid of this
>>13604 yep that fixed it, if anyone wants the xformers I built on the latest dev build for torch 2.1.0 along with vision 16.0 and cu118 here you go https://pixeldrain.com/u/Mj66zVXK
>>13557 The FUCK? You need to prompt for shit on MJ through discord? >>13565 I've been using it more or less daily for the past 3 years, if you're a musician there's literally nothing quite like the macOS audio stack. It just works. Literally just works.
>>13606 I think those production uses of MacOS make me want to make sure that my AM5 system has its main bootdrive be linux so I can run MacOS and Windows11 under a more proper VM
(1.74 MB 1024x1536 catbox_07wc1o.png)

>>13603 sorry for calling you new but to differentiate between anons is hard while I like smaller lora we have to make compromises sometimes that said I still have not found a good rate for locon which is why I am still making normal lora at dim8
>>13607 >macOS under a VM Need to read up on virtualizing a pre-existing installation, sometimes I'm not gonna feel like switching back and forth by rebooting. As soon as I'm done with my AM5 build I'm gonna dedicate my old sata SSDs to MacOS and install it there, thankfully all my files and libraries are on a portable RAID so I just need to reinstall some software and hope that none of it uses 32-bit shit
(1.68 MB 1152x1152 catbox_5wvx5w.png)

(1.68 MB 1152x1152 catbox_zh2r6a.png)

(1.67 MB 1152x1152 catbox_m5ezov.png)

(1.66 MB 1152x1152 catbox_o61awk.png)

https://mega.nz/folder/OoYWzR6L#psN69wnC2ljJ9OQS2FDHoQ/folder/fggV2Qrb added shion, jo'on up next unless I get distracted (I will)
>>13610 YOU MAGNIFICENT MOTHERFUCKER (i'm the one who asked about her ages ago)
>>13612 I forgot but there is a way to do it, it just requires more work to enable compared to running a windows 10/11 ISO under a VM in linux. but yeah your idea works too. surprisingly the B550XE motherboard I bought came with an NVMe SSD pcie gen 4 expansion card, so if I wanted to take advantage of a sale and put linux or mac on a drive I could do that. it wasn't even one of those lame expansion cards some motherboards give you that can only fit two; the one I got can fit 4, which was a steal for the price I got on this AM4 motherboard
>>13612 I meant a macOS VM under Windows but yeah, I'm gonna install it on the SSDs first, make sure it works and then try to virtualize it
>>13612 but I mostly got the board for the SLI support and the X570 boards under 600 bucks were already 3 slot bridged between their 4th gen 16x lane pcie slots so I got lucky, the x570 motherboards did have a LOT of USB slots though which is something I wish this B550XE board had, but oh well given that the motherboards for AM5 are already overpriced I will buy an X670 board when I decide to build that machine for my 4090
>>13608 oh no, I wasn't actually annoyed that you called me new, I was just saying! I found that I can bake good styles at dim4 conv dim8, you should give it a try. 13mb files vs 6mb, not much different in size, but it replicates styles much better. even though I was pushing dim8 really hard a while back, I now think baking at dim16 then sizing down works much better in most cases. often results in slightly higher file sizes, but usually is much better. though it does take a few resizes before you get it right, which is why I was working on a better way to queue resizes in the easy scripts recently (which is done, just want to also get that xti popup script running).
(1.71 MB 1024x1536 catbox_vl8o18.png)

>>13610 thanks for stinky neet god lora always nice to see more 2hus
Honestly wondering, what do you think is the most important thing in a GPU's specs when it comes down to training and generation? For me it's been VRAM, but what about clock speeds and such? Do they really make a big difference? I already understand that there's been proof that different series of NVIDIA cards have slightly different outputs on the same seed, but that's something that would make me go schizo if I look too deep into it.
>>13617 >training VRAM, since it allows bigger quality gains than what's possible with smaller cards >generation speed, so I can test out lots of batches quickly but in practice if I wanted to upgrade my card that badly I would skip the 4090 for the next card that has 48GB of VRAM, and I would be able to improve both factors
>>13617 VRAM first, CUDA core count/architecture second
should i pull...
>>13621 No it's still a hellscape of shitty code and voldy left again after he dumped it all to play vidya
Is the Teto LoRA on gayshit the only one?
sometimes my shit just breaks after reloading the UI when I add new loras and want the autocomplete extension to pick them up "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_mm)"
Yeah seems like I can still gen good shit with the fresh install of the webui and the latest dev build for torch along with the latest build of xformers I made with that same recent dev build for torch. Also I'll release Based66 in a bit just have to get the MEGA and Civitai upload ready
Why hasn't anyone tried to hack shit into the leaked NAI front-end?
(1.74 MB 1024x1536 catbox_bdbbiq.png)

(1.77 MB 1024x1536 catbox_diql1g.png)

(2.02 MB 1024x1536 catbox_hs9ah7.png)

(1.79 MB 1024x1536 catbox_v6i0yy.png)

I need more 2hus while looking for artists which have no lora and seem interesting to try something out
>>13628 do mike please and thank you
>nude pic >upscaling adds clothes BUT WHY THOUGH
>>13624 it seems like it, I baked that one eons ago when cropping to 512px was the norm. all things considered it works pretty well except for the floating hair drills.
>>13631 don't use latent, especially at higher scaling use shit like remacri with 0.3-0.4 denoise
(1.73 MB 1024x1536 catbox_44wjer.png)

>>13630 will try my worst please don't expect great things
Too much wildcard kills the wildcard. I'll need to remove the feet wildcards sometime. Running a few more cuz I still can't decide between the two
(1.29 MB 1024x1536 catbox_4mnm0k.png)

>>13635 sorry for the question but have you considered to clean your dataset so that you don't get the artist watermark/name 9 out of 10 times?
>>13621 I pulled... out
>>13636 I have, but it's quite a lot of work. May do that when I reorganize the dataset
>>13627 I don’t think there is much interest. The models were the only thing people cared about. And now we have to hope some can snag those new samplers.
>>13639 >I don’t think there is much interest. Shame, it's smooth and it works. Would use it over this gradio shit if I could do loras with it
(1.62 MB 1152x1152 catbox_6zxkoe.png)

(1.48 MB 1152x1152 catbox_mnj8r2.png)

(1.62 MB 1152x1152 catbox_h4pa7a.png)

>>13640 I've never actually used the front end, I just know I have it since I downloaded both halves of the leak. How does it work?
>>13642 I don't know if you can set it up with both halves, the package I downloaded was named "NAIFU" and the front-end was hacked together to work locally https://rentry.co/sdg_FAQ#naifu-novelai-model-backend-frontend
(2.32 MB 1536x1536 catbox_cutmsz.png)

(2.24 MB 1536x1536 catbox_fantum.png)

(2.35 MB 1536x1536 catbox_ra8bvz.png)

(2.11 MB 1536x1536 catbox_sl42ce.png)

I've seen the future and it works
>>13644 very cute style but is that it?
(2.22 MB 1536x1536 catbox_ki3doh.png)

There was a time where we thought it couldn't get better than NAI.
I like how a lot of anons got deep into building PCs and upgrading their gear/building new sets just for AI, my final goal is a 48gb VRAM GPU and a threadripper set- OR maybe my final goal should be a server room
Not precisely stable-diffusion related but since we now have AI for everything, how come nobody has tried to use AI to create voiced porn with the original voice? Would be extremely hot if we could create porn with the actual characters voice or something like Koikatsu characters where it's voiced by the actual canon character instead of a generic voice.
(254.10 KB 1200x675 mccover.jpg)

>>13649 anon...
>>13649 The only good option is paid (ElevenLabs)
>>13648 I was gonna upgrade anyway tbh but I could've saved 1K if it wasn't for the 3090 I got specifically for AI stuff
>>13650 >she knew about the horse audio kek gotta love how far some shit from 4chan reaches
>>13653 she's absolutely terrible and somehow managed to be even more trashy than keekihime but at least the horse memes make her tolerable
>>13654 I just don't watch major corpos anymore, I gave up after the "improve yourself" shit from Amelia my former oshi after the Tempus bs. Incel behavior? Yeah but before HomoEN she would have never said shit like that, waiting for more advanced AI vtubers desu
>>13655 that's why i only watch gura from EN, gura fans vindicated once again gonna take a break from chuubas for now, too distraught over my boing boing kettle girlfriend
>>13650 Whats the horse meme?
>>13656 same, I hope she comes back but knowing it's vshojo... I hate being a vtuber fan sometimes but I agree with Gura, she's holding the line against a lot of the changes I hate in the EN vtuber community, just hoping that Cover's dumbass can get them perms for RE4 Remake EN so she can play it and give chumbuds a great experience
>>13657 AI audio of her sucking horse cock kek
>>13657 found it holy kek can't believe it's still up on youtube https://youtu.be/TTjDDzyWYNQ
>>13658 gura good because she manages to be the only sane woman in EN when it comes to what chuubas should do/be AND she manages to filter out normies at the same time >>13660 that's not even the good one kek
>>13630 little sneak peak as I go to bed for now when I wake up I will try to rebake and polish it a bit more
>>13662 looking good already! gn anon
>actually got the hang of d-adaption now >still make LORAs with adamw8bit and lion
(1.84 MB 1152x1152 catbox_d27ndo.png)

(1.68 MB 1152x1152 catbox_v0okag.png)

(1.58 MB 1152x1152 catbox_jzcngh.png)

(1.65 MB 1152x1152 catbox_zsctv8.png)

and here's jo'on: https://mega.nz/folder/OoYWzR6L#psN69wnC2ljJ9OQS2FDHoQ/folder/rtYiAZ6T One thing that was really surprising is that it actually renders her dress as sleeveless in shots with the jacket off-shoulder even though there were no off-shoulder shots in the dataset, I've trained other characters with layered clothing and it always tried to make short/sleeveless underclothes long sleeved. >>13662 >>13665 cute mikes >>13664 same desu, though I do make a mixture of adamw8bit and d-adapt LoRAs, basically depends on how much I care about tinkering with hyperparameters
upscaled it again with remacri, still not completely satisfied with it
>>13667 try 4x ultrasharp
>Use latest working commit WHY NOT JUST ADD A OPTION TO SIMPLY NOT UPDATE YOU WERE SO CLOSE
>>13669 there is an option. it's called not git pulling, dumbass
kek people on 4chin are saying they like me as a mixer because I don't tripfag but based66 is going to be the mix that I upload to huggingface and civitai... it's over... "the numbers? what do they mean peko?!" (but I still agree that based65 final mix wasn't as good as I was hoping it was but it was probably my fault for not testing out more MBW presets for the last two models I put into protomix)
>>13670 did you even look at the screenshot before posting? that's from the colab repo
>>13671 but I'm wondering, did people have this error with final mix or protomix? >very slow, wayyyy more black spots if i'm not running no-half. Because the only issue I had with 65 was how it needed more effort/outputs to have a more coherent output with LORAs
>>13669 anon that does the same thing as not updating, you are downloading a version before the update
(1.71 MB 1024x1536 00067-1051832255.png)

(1.78 MB 1024x1536 00068-1051832256.png)

(1.74 MB 1024x1536 00066-1051832254.png)

(2.15 MB 1024x1536 00069-1051832257.png)

>>13671 based64 is way more versatile, but i've never had issues with black outputs with based65 lol
>>13675 hmm maybe it's something with their install? I was running most of my LORAs outputs with 65 on the torch upgrade and never had issues. Well we already know that every GPU has different outputs and behaviors so... just fucked RNG I guess kek
When is Klee's mom coming out in Genshin Impact, I want to do Oyakodon with a LORAs baked of her along with the Klee ones
>>13677 I don't think they'll add Alice until we get to like, version 6 or 7 because she's a walking loredump. For reference, we're on version 3.5.
Tangentially related, site that seems very likely to be using GPT4 and is currently free, get on it while you can if you want to experience it. https://app.embedbase.xyz
>>13679 >Click schedule a demo button like a moron >Automatically logs into my google account and tries to actually schedule something with the maker >Close tab in a panic I hope I didn't accidentally schedule anything
(1.83 MB 1024x1536 00073-1051832261.png)

>>13676 >hmm maybe it's something with their install? i don't know, but that general is getting so insufferable and retarded to the point where i feel like every time i visit it the average user iq has gone down by 10 points
>getting nan errors when upscaling yeah I'm going back to the old gradio, fix your shit Voldy, the last commit wasn't this autistic
>>13676 i get occasional blackscreens from most VAEs other than kl-f8-anime2 and the original SD one
>>13681 advancement in the cutting edge has slowed down and become a lot more effort-intensive, and at the same time the barriers to entry have come down a lot. i check the /b/ threads every now and again and the average quality has been in steady decline since like january because the oldfags largely stalled or left and the newfags are too dumb to improve even with (because of) all the helping hands that are available.
>>13682 nah I fixed it, just had to switch models and do a hard restart, weird shit. I was planning to move everything back to my last commit SD folder kek
>>13643 Oh yea, I have that too. Didn't remember if that was part of the leak or not. Maybe someone could give Voldy a reach around to work on it in secret?
So what's the current meta for easy scripts install? Which torch? Is Triton needed? couldn't get it to work
>>13643 did anyone work the text frontend into anything usable?
>>13686 That would be nice. The newer NAI one from 3 days ago is actually pretty damn good minus the retarded choice of bringing the prompt stuff over to the left but yeah. This one could probably be hacked to incorporate more shit.
>>13687 triton is only needed for the torch 2.1.0 dev builds; 2.0.0 is the stable release and the most reliable build because it's already done
>>13687 I wholeheartedly recommend the Triton even though I'm more of a Motif fan. Whatever you do just avoid the Fantom, terrible value.
>>13690 Tried torch 2.1.0 with and without triton and got some weird cuda kernel image errors, so should I just do 2.0.0?
>>13692 yeah, it's fine 2.0.0 and 2.1.0 are both fast and work with equal speed boosts, you only really get the huge speed boost if you're on the latest lovelace chip
>>13693 I generated both via easyscripts, can I just slot them into webui's venv or do I have to do something else?
>>13693 Thanks. I have an ol' laptop rtx2060 so it's weird that it can't build 2.1.0 but if it works on 2.0.0 then no problem
>>13694 you might be able to, but you will likely need to also activate the venv and run through the requirements of webui though, I don't think everything that webui needs gets installed via sd-scripts
>>13694 if you want to install it into the webui
>edit this line within the launch.py file
torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==2.0.0+cu118 torchvision==0.15.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118")
you can easily rename your current venv folder to venv2 if you want to revert later. make sure to clear everything within the webui-user.bat file, launch it again, let it create a venv folder built off torch 2.0.0, and once that's done close it and open a powershell window within the root folder of stable-diffusion-webui:
.\venv\scripts\activate
make sure to remove the xformers line beneath the torch command line within the launch.py file before doing the next step. next, pip install this file https://pixeldrain.com/u/fcRjcPyW (make sure you use the full file name). then deactivate the venv (just type deactivate within powershell), and after that add back your old command line args within the webui-user.bat file but make sure you change
>--xformers
to --force-enable-xformers. after that you can check that gradio shows torch 2.0.0 and the xformers you installed, and then you can just change back to --xformers.
>>13696 >>13697 Thank you much anons
>>13697 >you change --xformers with --force-enable-xformers Did voldy fix this? I remember that if you have xformers installed, using --force-enable-xformers won't actually recognize xformers is there and you fail to import it, but just using --xformers is fine
>>13699 it should work, I realized that the automatic xformers install that's within the launch.py file was conflicting with the xformers I built on 2.0.0
>>13700 >>13701 mine worked right away as soon as I removed the xformers environ.get line within the launch.py file
>>13698 that's actually really good, i like how it retained the noise/grain and the leotards kek
>>13693 >>13695 Doesn't work on 2.0.0 either, still the same error >RuntimeError: CUDA error: no kernel image is available for execution on the device
also if any of you want to know if you can do this on the latest gradio build, yeah you can do it just make sure to backup your extensions and do a fresh gitclone
>>13704 uhhh did sd webui work normally before you updated? Because I've never seen that error myself; the only shitty ones I got from updating were a NaN error and the xformers one, and I fixed both
>>13706 This is easy training scripts. It worked on whatever the last stable pytorch version was, but not 2.0.0 and 2.1.0. I have both CUDA 11.6 and 11.8 installed so it should work
>>13707 weird I only noticed errors for updating torch with the webui, you can redo the install but did you do it through the update_torch.bat file?
>>13708 I installed from scratch with the v6 installer. I'll try doing it through the script if it's there since it's old v5
>>13709 yeah that works. I tried updating the xformers that easyscripts installs with 2.1.0 to the xformers I built today with the latest dev build, but it gave me an error. you only really need to (and can) do your own custom torch upgrade on the main sd-scripts folder and the GUI. I can't be bothered with the GUI because I thought I broke something, but all my locons ended up fried because the GUI dev is retarded and didn't expose the option to disable cp decomposition, which is only useful for LOHA
>>13705 it works on the old build too, I only really recommend that 4000 series users fuck around with the dev builds for everyone else just do 2.0.0
(2.10 MB 1024x1536 catbox_esgndr.png)

(1.99 MB 1024x1536 catbox_o9wtoe.png)

(2.16 MB 1024x1536 catbox_el7ls4.png)

(2.09 MB 1024x1536 catbox_ts1a24.png)

might be the first time I'm not posting bunnies
(1.84 MB 1024x1536 catbox_951xdt.png)

(1.88 MB 1024x1536 catbox_s6ovvm.png)

(1.88 MB 1024x1536 catbox_p9n13d.png)

>>13712 very nice
(1.83 MB 1152x1152 catbox_gndcwm.png)

(1.76 MB 1152x1152 catbox_vwqzi1.png)

I think I found a new favorite artstyle mix
(1.76 MB 1152x1152 catbox_ix7g5q.png)

(1.68 MB 1152x1152 catbox_svjz1s.png)

(1.46 MB 1152x1152 catbox_8dci68.png)

(1.83 MB 1152x1152 catbox_fbv6lj.png)

MY FUCKING SIDES HOLY SHIT JEWIDIA JUST WENT "CRYPTO IS USELESS AND ADDS NOTHING TO SOCIETY" https://youtu.be/qlPSNOU2WEA?t=68 i can hear hava nagila blasting from their offices
>>13717 They got slammed for that statement already because they tried pushing the A6000 as a “mining card”. Essentially, if they knew crypto had no value, why only say it after the market crashed and ETH killed miners? Kek
Give me model/mix recommendations guys, I'm at work with fast internet and feel like downloading some new ones. Here's what I've already got: https://pastebin.com/7aTsTGvW
>>13719 Anon.. you absolute fucking retard lmao good job outing yourself
>>13719 >>13720 Sure hope you didn’t download that at work kek
>>13717 hey this is the only guy I like for tech news, yeah it's hilarious after knowing that Nvidia made mining cards based off their data mining cards holy kek. They're just being jews trying to hide their dirty laundry, it's been like this with these kikes
>>13719 Nice, congratulations on putting yourself on a list!
>>13719 why do you faggots even come here when you know that *NO NO WORD MODEL* is banned here
>>13641 The ushiyama ame lora is truly amazing
(438.60 KB 1536x2304 catbox_fta2ci.jpg)

(4.39 MB 1536x2304 catbox_5f94t0.png)

(4.58 MB 1560x2304 catbox_7bc7t3.png)

(4.29 MB 1536x2304 catbox_b37w06.png)

and I am awake again time to make cute cats >>13666 these trips of wealth stealing brat are making me worry
>>13712 Catbox please
>>13729 bro...
(2.01 MB 1024x1536 catbox_26897r.png)

(1.90 MB 1024x1536 catbox_hyuflc.png)

(1.83 MB 1024x1536 catbox_f0qk2q.png)

(1.86 MB 1024x1536 catbox_bmeen7.png)

>>13630 >>13728 https://anonfiles.com/ZaB1Peiaz4/goutokuji_mike_safetensors Still testing as you have to prompt multicolored hair to make her a true calico but seems fine
(2.33 MB 1248x1408 38681-3732507618.png)

Fat asses scarlet sisters begging for it uoh....
>>13732 nice, catbox?
>>13733 Very nice anon. What artist/mix? Puss in box?
Did we get some newfags or something?
>>13736 yeah, you
(1.90 MB 1024x1536 catbox_v233dp.png)

(1.92 MB 1024x1536 catbox_lixa60.png)

(1.79 MB 1024x1536 catbox_3syzt3.png)

>>13710 Updating with the update-torch script didn't work either. I'll just stay on the old one then.
>>13740 what is hll anyway
>>13742 hololive/vtuber finetune model made by an anon from /vt/ very good quality base for mixing and basedmix anon uses it for his model merge
>>13729 You will need the catbox script, but these were uploaded with catbox, so the prompt should be included
(1.29 MB 1024x1024 catbox_8p9r87.png)

does anyone know where to find any of these loras? - blade-128d-512x - ponsuke_(pon00000) - multi_minigirl_e6 lora I don't know if im just missing some list that has them or not. (the gayshit list says it has minigirl but it was removed from the mega) image as payment
(1.93 MB 1024x1536 catbox_rempm6.png)

(2.14 MB 3840x4096 1680615978607670.jpg)

Who is the artist lora for this? The poster won't say but I assume he is here too
Are (positive) embeds still part of the meta? I saw a discussion about it on /g/ and it got me thinking about experimenting with them and maybe even giving training a couple a go.
>>13748 they are more or less placebo that add random noise and skew your generation style
>>13749 >placebo meh, what a shame
>>13751 Should've mentioned technically LoRAs apply both positive and negative weights which is why you mentioning (positive) brought this to mind
what are some of the current "base" realism models along the lines of F222, Basil, cafe-instagram? I want to experiment with some mixes. And what would be the best way or settings to experiment with when merging real models together?
>>13754 the forbidden model
>>13755 ...except that one
>>13754 id say chillout mix, I think its good but its very catered toward asian faces/features so if you don't want your characters looking like a Korean MMO mobile ad don't use it
>>13745 ah yeah, blade is my lora, it's a bit old and I want to rebake it to not be dim128, but link: https://mega.nz/folder/CEwWDADZ#qzQPU8zj7Bf_j3sp_UeiqQ/folder/iYwwQQpA mine is still on the gayshit repo, it's the v2 version I baked a long time ago
>>13758 ahh makes sense, the filenames didn't match so I wasn't sure it'd be the same one. cute pics earlier btw, I was toying around with them as a base earlier >>13746 tyvm!
Just as I was finishing up an update to the scripts, this update drops...

2023/4/4: There may be bugs because I changed a lot. If you cannot revert the script to the previous version when a problem occurs, please wait for the update for a while. The learning rate and dim (rank) of each block may not work with other modules (LyCORIS, etc.) because the module needs to be changed.

Fix some bugs and add some features:
- Fix an issue that .json format dataset config files cannot be read. issue #351 Thanks to rockerBOO!
- Raise an error when an invalid --lr_warmup_steps option is specified (when warmup is not valid for the specified scheduler). PR #364 Thanks to shirayu!
- Add min_snr_gamma to metadata in train_network.py. PR #373 Thanks to rockerBOO!
- Fix the data type handling in fine_tune.py. This may fix an error that occurs in some environments when using xformers, npz format cache, and mixed_precision.
- Add options to train_network.py to specify block weights for learning rates. PR #355 Thanks to u-haru for the great contribution!
  - Specify the weights of 25 blocks for the full model. No LoRA corresponds to the first block, but 25 blocks are specified for compatibility with 'LoRA block weight' etc. Also, if you do not expand to conv2d3x3, some blocks do not have LoRA, but please specify 25 values for the argument for consistency.
  - Specify the following arguments with --network_args:
    - down_lr_weight: the learning rate weight of the down blocks of U-Net. Either the weight for each block (12 numbers such as "down_lr_weight=0,0,0,0,0,0,1,1,1,1,1,1") or a preset such as "down_lr_weight=sine" (the weights by sine curve); sine, cosine, linear, reverse_linear, zeros can be specified. Also, adding +number such as "down_lr_weight=cosine+.25" adds the specified number (such as 0.25~1.25).
    - mid_lr_weight: the learning rate weight of the mid block of U-Net. Specify one number such as "mid_lr_weight=0.5".
    - up_lr_weight: the learning rate weight of the up blocks of U-Net. Same as down_lr_weight.
    - If you omit some arguments, 1.0 is used. Also, if you set a weight to 0, the LoRA modules of that block are not created.
    - block_lr_zero_threshold: if the weight is not more than this value, the LoRA module is not created. The default is 0.
- Add options to train_network.py to specify block dims (ranks) for variable rank. Specify 25 values for the full model of 25 blocks. Some blocks do not have LoRA, but always specify 25 values. Specify the following arguments with --network_args:
    - block_dims: the dim (rank) of each block. Specify 25 numbers such as "block_dims=2,2,2,2,4,4,4,4,6,6,6,6,8,6,6,6,6,4,4,4,4,2,2,2,2".
    - block_alphas: the alpha of each block. Specify 25 numbers as with block_dims. If omitted, the value of network_alpha is used.
    - conv_block_dims: expand LoRA to Conv2d 3x3 and specify the dim (rank) of each block.
    - conv_block_alphas: the alpha of each block when expanding LoRA to Conv2d 3x3. If omitted, the value of conv_alpha is used.

fucking hell, this is either gonna do nothing or completely change how everything is baked... hah, I'll get to work on updating it, seems like it'll be easy enough, though I'll have to drop LyCORIS support until it's updated, so no more loha for the time being. locons still work though because they are naturally supported. And I still need to make the xti script too... ok, lots of work ahead, that's fine, I'll do it. will have to make a new popup for the unet values though, 25 input boxes shaped like a U will probably make the most sense here. I'll get it to work, I'll make it easy enough to use for the end user. might take a day or so though; till then, I'll release the current update I was working on, the resize + locon extract queue update.
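A tiny sketch of what "specify 25 values always" means in practice; `block_dims_arg` is a hypothetical helper for building the --network_args value, not part of sd-scripts:

```python
def block_dims_arg(dims):
    """Format a per-block dim list as the block_dims network_args value
    described in the changelog. The full model is treated as 25 blocks,
    so exactly 25 values must be given even for blocks with no LoRA."""
    if len(dims) != 25:
        raise ValueError(f"expected 25 block dims, got {len(dims)}")
    return "block_dims=" + ",".join(str(d) for d in dims)

# the example U-shaped ranks from the changelog
dims = [2,2,2,2,4,4,4,4,6,6,6,6,8,6,6,6,6,4,4,4,4,2,2,2,2]
print(block_dims_arg(dims))
```

The length check is the main point: passing fewer than 25 values is presumably the easiest way to get a confusing error out of the trainer.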
>>13760 glad you are enjoying something I made!
>>13761 nice, gonna wait for that update to get gamma for next bakes then I hope it won't force torch 2.X.0 anytime soon though since I can't get it working
>>13764 nah, I'll continue to support the older torch 1.13.1 because of compatibility issues, anyways, gotta write up a readme and release the current update.
>block weight for learning rates If I specify down_lr_weight = cosine, does this mean that IN00 will have weight 1 and IN11 will have weight 0? If that's the case, then I'm guessing the new meta will be down_lr_weight=cosine mid_lr_weight=0 up_lr_weight=cosine for styles and down_lr_weight=sine mid_lr_weight=1 up_lr_weight=sine for characters
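If that guess is right (IN00 near 1, IN11 near 0 for cosine, and the reverse for sine), the presets would look roughly like this. The exact curve sd-scripts uses isn't documented in the changelog, so `down_lr_weights` is only a hypothetical approximation for eyeballing which blocks get emphasized:

```python
import math

def down_lr_weights(preset, n=12, offset=0.0):
    """Rough guess at the sine/cosine presets over the 12 down blocks:
    cosine starts near 1 at IN00 and falls toward 0 at IN11, sine is the
    reverse. offset mimics the "+number" syntax like cosine+.25."""
    weights = []
    for i in range(n):
        t = i / (n - 1) * math.pi / 2
        w = math.cos(t) if preset == "cosine" else math.sin(t)
        weights.append(round(w + offset, 3))
    return weights

print(down_lr_weights("cosine"))             # heavy early down blocks
print(down_lr_weights("sine", offset=0.25))  # like "sine+.25"
```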
>>13761 so was Kohakublueleaf's method for LOCONS never using blocks? Because this method of having to specify each dim for 25 blocks is insane, this is just going to make baking into a nightmare to see which one is the best.
>>13767 I think I'll go back to traditional LORAs at this rate, this new meta is going to require too much trial and error just to release a new LORA
>>13767 If you have a good dataset, using the highest dim for all of them and then using resize is good enough. I think this is only useful if you have a shitty dataset where all images of your character is of the same style since you only used screenshots from the game they're in
>>13766 possibly, but it can probably be very useful to bake a lora on specific parts of an artists style as well, such as learning the eyes but not the rest or the lighting but nothing else. I can see it being pretty useful overall. >>13767 yeah, pretty much the biggest reason why I immediately frowned when I saw the update. though the good thing is it doesn't actually replace the normal way to train, we can just ignore it if you don't want to deal with it. I unfortunately do however because I'm sure some people would want to use it. >>13769 very good point, I can see a few uses for it, that is definitely one of them
>>13769 but even when I'm mixing with blocks for models, the things I've seen and what others have described for what each block influences is "uh it kind of influences the lines or backgrounds", there isn't a clear answer. I make sure my datasets are good, so maybe I'll use this method if I'm dealing with a "single scene" LOCON, but otherwise it sucks I have to drop KohakuBlueLeaf's LOCON method now because it never used blocks properly
>>13771 it's only until kohaku updates it on their end, though you could definitely just use an earlier commit of the scripts still too
>>13772 I wonder what Kohakublueleaf's method was even doing differently compared to the traditional block method then, what did their convdim/convalpha influence along with the method Kohya used?
>>13773 I honestly have no clue, I didn't exactly look too far into it.
>>13649 No good free option yet and paid ones watermark their files to possible hunt you down later.

