/hdg/ - Stable Diffusion

Anime girls generated with AI

/hdg/ #15 Anonymous 06/06/2023 (Tue) 17:53:35 No. 17518
Previous thread >>16264
Been in the mood to prompt either the pink girl from star rail or Nahida lately. Shame the discussion for this game is already so unusably bad.
(4.83 MB 2048x2048 21.png)

>>17520 not trying to be a jerk but if I were to post things on pixiv or somewhere similar I would try to make sure that glaring things like that left thumb were a bit less shitty
seems more accurate after adding some pixiv data and removing bad tags. but now her sv hairstyle is leaking into the original one. argh.
>>17475 Did you use a Lion optimizer for training? Can you share your hyperparameter config for it? I've had no luck getting quality comparable to adam8 out of the 8-bit version of Lion.
(968.20 KB 640x960 catbox_scedcr.png)

>>17523 Here's the json, toml, and I threw in the lora and txt file as well. https://files.catbox.moe/qqo1v9.7z The toml has network dropout enabled which the json (and lora) don't have. Aside from that both config files should be the same. The reason I went with lion for 16-dim 16-alpha is because 1e-3 adamw8bit would result in weird overfitting (picrel) whereas lion doesn't seem to. Aside from that 1e-4 lion and 1e-3 adamw8bit are about equivalent from my testing at least. Raising the learning rate any more on either results in garbage output.
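For anyone skimming, the rule of thumb above is that Lion wants roughly a 10x lower learning rate than AdamW. A minimal sketch with the optimizer classes that sd-scripts can wrap behind --optimizer_type (the lora network and training loop are omitted; the linked toml/json remain the authoritative settings):

```python
# Rough sketch of the "1e-4 lion ~= 1e-3 adamw8bit" comparison above.
# Assumes the lion-pytorch and bitsandbytes packages that sd-scripts can use.
import torch
import bitsandbytes as bnb       # provides AdamW8bit
from lion_pytorch import Lion    # provides Lion

net = torch.nn.Linear(768, 768).cuda()  # stand-in for the 16-dim/16-alpha lora weights

opt_adamw8 = bnb.optim.AdamW8bit(net.parameters(), lr=1e-3)  # tends to overfit weirdly per the post
opt_lion = Lion(net.parameters(), lr=1e-4)                   # roughly equivalent behaviour
```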
(5.27 MB 2048x2048 27.png)

>>17341 >>17488 >>17520 >>17525 Can you not spam your watermarked garbage here please
>>17527 suck my balls
(4.97 MB 2048x2048 24.png)

>>17519 Like for its threads or general discussion regarding the game? Because I’m just hoping with all I saved up that I get Bronie
One of our pixiv posters got nuked because of Pixiv’s new rules, it’s fucking over
(2.36 MB 1280x1600 catbox_1yiks6.png)

When bike shorts gives you a bicycle, I guess I should have expected motorcycle helmet to give me a motorcycle.
(4.98 MB 2048x2048 29.png)

>>17524 Oh, I thought you were using an 8-bit version of Lion, but thanks anyway. So the recipe is just to lower the LR 10x for Lion? Anyway, isn't 1e-3 the optimal LR for a huge gap between dim and alpha like 128/1? Also noticed significantly worse results with bitsandbytes 0.38.1, which the kohya readme recommends for the new Lion. Both grids are adam8: v1 on the first grid is 0.38.1 plus some commit from May, and v3 on the second is sd-scripts 0.6.2 with the good old bitsandbytes; the training process looks much smoother on the older one.
(1.86 MB 1280x1600 catbox_azdsk5.png)

>>17535 >Anyway, isn't 1e-3 the optimal LR for a huge gap between dim and alpha like 128/1? Interesting, haven't heard about that but it seems like a bit of a niche use case. Is there a new lion? I haven't been following developments too closely; haven't felt the need since nothing aside from rescale cfg seems like it will change much at the moment.
(1.87 MB 1280x1600 catbox_hekyml.png)

(1.90 MB 1024x1664 ComfyUI_14699_.png)

(1.95 MB 1024x1536 ComfyUI_14538_.png)

(2.05 MB 1024x1536 ComfyUI_14553_.png)

(1.86 MB 1024x1536 ComfyUI_14801_.png)

(4.98 MB 2048x2048 29.png)

(2.36 MB 1280x1920 catbox_9u56rj.png)

>>17538 Thanks, I had forgotten about this artist. Always liked the shading and not so much the faces
(1.57 MB 1024x1536 00903-1057876634.png)

(1.56 MB 1024x1536 01514-2063387899.png)

>>17536 > Interesting, haven't heard about that but it seems like a bit of a niche use case. It's the first set of decent params I've read about since alpha was introduced in the scripts. Not very good though: the style is just crap, but the characters are decent. > Is there a new lion? Dunno how new it is, but an 8-bit Lion version appeared some time ago https://github.com/kohya-ss/sd-scripts#optional-use-lion8bit I'm also not completely sure whether the bitsandbytes problem I've encountered isn't just a Windows thing. There's even a newer package on PyPI, but the libs inside it are only for loonix. Btw, what is rescale cfg? Haven't heard about it. Would also be nice if you could show me how to adapt the catbox script so it works here too.
>>17538 This is actually very cool good job anon.
(2.13 MB 1280x1600 catbox_lwgltc.png)

>>17541 Catbox script is here https://gitgud.io/GautierElla/8chan_hdg_catbox_userscript Rescale cfg was posted a couple times last thread https://arxiv.org/pdf/2305.08891 Simply put, it's a few changes that need to be made on multiple ends to fix the range of brightness on outputs. The paper also explains why the current implementation skews towards brighter images and why it's almost impossible to correctly output dark images. It has been known for a while and is one of the main uses for noise offset.
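Since it keeps coming up: the sampling-side part of that paper boils down to a few lines. A sketch assuming you already have the conditional and unconditional model outputs in hand (variable names are mine; φ ≈ 0.7 is the paper's suggested mix):

```python
# CFG rescale from https://arxiv.org/abs/2305.08891 (sec. 3.4), sketched.
# pos/neg are the conditional and unconditional model predictions, shape (B, C, H, W).
import torch

def rescaled_cfg(pos: torch.Tensor, neg: torch.Tensor, cfg_scale: float = 7.0, phi: float = 0.7):
    cfg = neg + cfg_scale * (pos - neg)                  # normal classifier-free guidance
    std_pos = pos.std(dim=(1, 2, 3), keepdim=True)       # per-image std of the cond prediction
    std_cfg = cfg.std(dim=(1, 2, 3), keepdim=True)
    rescaled = cfg * (std_pos / std_cfg)                 # undo the std blow-up from guidance
    return phi * rescaled + (1.0 - phi) * cfg            # blend so it isn't over-flattened
```

The other half of the paper (rescheduling the noise so the final training step is actually pure noise) is a training/sampler change, which is why the post says fixes are needed on multiple ends.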
(1.91 MB 1024x1536 catbox_a1fi5w.png)

(1.52 MB 1024x1536 catbox_kmtya2.png)

(1.88 MB 1024x1536 catbox_gkjopa.png)

(1.74 MB 1024x1536 catbox_t0608h.png)

>>17543 So it'll finally be possible to fix it inside the UI, rather than by training loras/models with it, neat. Also testing the script. Seems to work.
Are there any supermerger guides?
>>17532 He got banned or what?
>>17546 nah, it was for uncensored shit. I thought it was because they were using artist style LORAs
>>17538 >>17540 nice. in hindsight, might wanna prune tags like commentary request, highres, photoshop \(medium\), etc.
(1.57 MB 1024x1536 00178-1787772572.png)

(1.38 MB 1024x1536 00022-3810338039.png)

(1.72 MB 1024x1536 00336-2055891221.png)

(1.80 MB 1024x1536 00018-2197699490.png)

(1.38 MB 1024x1536 00022-3810338039.png)

(1.55 MB 1024x1536 00047-254711810.png)

(1.52 MB 1024x1536 00154-139704435.png)

(1.85 MB 1024x1536 00200-405567061.png)

>>17547 That shit's kinda weird. I got away with posting uncensored shit until I posted cunny ones and it got hammered asap.
(2.12 MB 1024x1536 ComfyUI_14900_.png)

(2.26 MB 1024x1536 ComfyUI_14899_.png)

(1.93 MB 1024x1536 ComfyUI_14898_.png)

(2.26 MB 1024x1536 ComfyUI_14925_.png)

>>17548 yeah need to start doing that... by the way does anyone have recommendations for retouching datasets? maybe an image sharpener or something. i kind of want this lora to turn out crisper but i think the blurriness inherent to his style got carried over
>>17552 If you have photoshop, you can record any sequence of operations as an action and batch run it on all your files. It's only static tho, so if your images have varying amounts of blurriness, then it's useless.
>>17553 Also it's quite slow, as everything is happening in the gui.
>>17553 Or if you really only want to do sharpening, you could use >>17388 to bulk sharpen images in the extras tab right in the webui, as it has a sharpness slider there.
(1.97 MB 1024x1536 ComfyUI_15001_.png)

(1.93 MB 1024x1536 ComfyUI_14993_.png)

(1.94 MB 1024x1536 ComfyUI_14979_.png)

(1.82 MB 1024x1536 ComfyUI_14967_.png)

seggs
>>17552 >>17553 >>17555 Shoebill anon here, you can create a droplet (a way to batch process files by using a PS action) in Photoshop with the Blur Sharpen action I use. I modified it slightly so that it's set to 0.3 pixels (and the gaussian blur dialogue doesn't pop up). Just open the file to import the action. https://pixeldrain.com/u/1whAs5Dv Then go to File > Automate > Create Droplet and follow picrel. Once you're done all you need to do is drag and drop your training data root folder on the executable you just created. GIMPfags btfo'd once again. Please note that jpgs will look like shit when sharpened, so clean them up first with your upscaler of choice. I'm lazy so I just use waifu2x-caffe with level 0/1 denoising; Cupscale is for people with too much time on their hands (tryhards). P.S. just save your best gens/upscales to a folder and drop it on the droplet, shit works wonders
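If you'd rather not touch Photoshop at all, the same batch idea is a few lines with Pillow. An unsharp mask isn't the same recipe as the vivid-light blur-sharpen action above, just the closest stock stand-in; radius/percent are guesses to tune and the folder names are placeholders:

```python
# Walk a dataset folder and apply a mild sharpen to every PNG.
from pathlib import Path
from PIL import Image, ImageFilter

src = Path("dataset")              # training data root (placeholder)
dst = Path("dataset_sharpened")
dst.mkdir(exist_ok=True)

for p in src.glob("*.png"):        # as noted above, clean up jpgs with an upscaler first
    img = Image.open(p).convert("RGB")
    img = img.filter(ImageFilter.UnsharpMask(radius=0.3, percent=120, threshold=2))
    img.save(dst / p.name)
```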
>>17557 >GIMPfags btfo'd once again come on, I just have a mental block against installing pirated software. no, not because I care about jews missing revenue, I am paranoid about malware. yeah
>>17549 Cute style
>>17558 >I am paranoid about malware I haven't used a single antivirus program in decades (I disable windows defender on new installs) and I get all my pirated shit from rutracker and audioz, so far no russian or hohol has managed to steal anything from me, neither passwords/info nor money. If you're that paranoid just keep some decent AV running.
>>17560 I know that this is stupid but what can I do, I know that this shit would cause anxiety to me so I'd rather larp as a freetard and just use gimp. It's fine for minor tweaks usually and I'm used to its interface at this point. Although now I wonder if photoshop would be usable from a vm. I used to play VNs from a virtual machine and it was fine
>>17561 >what can I do
(1.33 MB 1024x1024 00425-4292881181.png)

>>17562 >Get help No.
>>17556 Could you catbox one of these? Mostly curious about the node workflow but even if it's nothing special then just to see the prompt at least
>>17564 it's an orthodox a1111-style workflow that's shipped as the default in a frontend called comfybox, so it's only openable with that frontend. not much going on beyond two-stage denoising + loras. https://files.catbox.moe/l6ksx9.png
>>17565 Ah okay, I'm familiar with ComfyBox too but haven't tried it out again in a while, ty
>>17557 but why would you use photoshop of all things for sharpening lol
>>17567 >but why would you use an image manipulation tool for image manipulation lol Retarded non-question aside, blur sharpening is quite different from your average sharpening
>>17569 >Retarded non-question aside no it's pretty fucking valid. i love how you're suggesting to people in an ai general to sharpen by using traditional image processing instead of just upscaling and actually adding detail with ai. it's backwards and funny because one of the points of ai in image manipulation was to solve those limitations
>>17557 >blur sharpen >introduce more artifacts into my dataset no thanks
>>17571 >>17570 both me btw. not trying to samefag
>>17570 Read reply chains first before you embarrass yourself. >>17552 asked >"by the way does anyone have recommendations for retouching datasets? maybe an image sharpener or something. i kind of want this lora to turn out crisper but i think the blurriness inherent to his style got carried over" And that's what I suggested. >just use AI to upscale and add detail By all means go ahead and show us how to add details to an existing image by simply upscaling without using img2img first. Or maybe learn how to read and realize that I never suggested using it as a replacement for hiresfix/img2img upscaling. >>17571 >introduce more artifacts into my dataset Like I've said, it's shit on jpegs. Clean those up first with cupscale and some upscaler/denoiser.
>>17565 >>17566 Okay it sure has been a while, dang that's been improved. If you're the dev then props to you. It still has this sense of feeling "fragile" to me, but that's probably because I don't yet have good knowledge of everything underlying it like I do with the webui. I see the vision with this though and can imagine I'd be using this more as it's further fleshed out.
>>17573 >Read reply chains first before you embarrass yourself. and did you even look at the images he posted? they all have varying levels of sharpness and aren't just uniformly blurry. you provided a solution to a problem he didn't even propose, and a worse one at that. hell even topaz labs has better software for sharpening >Or maybe learn how to read and realize that I never suggested using it as a replacement for hiresfix/img2img upscaling. oh ok, then he should do that and/or use the image filter extension instead of fucking creating batch actions for an inferior solution in photoshop. cool
>>17565 Not really related to my post in >>17574, but I noticed setting the VAE to anything other than the NAI VAE causes this weird pixelation and blurriness when using the siraha LoRA. Any idea what's happening there? Same thing happens in the webui so it's not some isolated ComfyUI issue. 1st pic is NAI, 2nd pic is the 840000 VAE (or any other one really).
>>17575 >and did you even look at the images he posted? they all have varying levels of sharpness and aren't just uniformly blurry >gather all the blurry images in a folder >batch process that folder >put them back 0.3px is the breakpoint for 2D illustrations, unless you have some absurd res like 8K you don't need to turn it up. If you want to do it manually to fine-tune the value each time then he will. Not everyone is as nigger-can't-understand-hypotheticals-retarded like you are. BTW you DID read the part about him wanting to retouch the DATASET and not the GENS, right? >topaz labs lol >>17575 >for an inferior solution in photoshop Cope and seethe.
>>17577 >If you want to do it manually he* I was directing it at you but then I realized you wouldn't do it anyway.
>>17577 >BTW you DID read the part about him wanting to retouch the DATASET and not the GENS, right? he literally said the blurriness was from the blurriness present in the dataset images, which is replicated in the gens because it's a fucking style lora lmao >lol yes, anything topaz labs puts out is infinitely better than just using the blur sharpen filter, how retarded are you? >Cope and seethe. what exactly am i coping about? i use photoshop and illustrator daily for both work and wouldn't ever touch gimp with a 10 ft pole. i'm laughing at you because your method takes more work for shittier results. hell, even within ps, the blur sharpen filter is one of the worse ways to sharpen an image compared to something like high pass sharpening. obviously though, we're looking for a one size fits all solution, in which case what you suggested is unironically worse than several other easy solutions
>>17579 >compared to something like high pass sharpening. Thanks for outing yourself, that tells me you absolutely don't know what you're talking about. If it's true that you use PS daily for work then it's time to change careers.
>>17581 >no response, you're retarded, i win thanks, that's what i thought lol.
>>17582 Yeah, you're retarded for claiming that high pass > sharpening with vivid light + blur. You also keep claiming that it's more work but it takes 5 seconds to set up a droplet and I've already set up the action for you. Is dragging and dropping a folder on an icon too much work for Mr PS professional? I'm off to bed. You should be off to getting your eyes checked.
>>17583 i love how you didn't even address the first part of my post because you know i'm right >you're retarded for claiming that high pass > sharpening with vivid light + blur. they each have their own caveats, but for what you're suggesting absolutely >Mr PS professional lmao i may have experience in ps and use it for work, but i'm not as arrogant to call myself a professional. additionally, i also don't get pissy and throw a tantrum when someone says i'm wrong, so ditto >I've already set up the action for you. Is dragging and dropping a folder on an icon too much work for Mr PS professional? how dense are you? i was never going to use your shitty action because using ps for what you suggested is exactly what i'm saying is dumb. why would i do that when i already have a better way to batch upscale/sharpen images built into an extension on my webui?
(2.55 MB 1024x1536 ComfyUI_15352_.png)

(2.66 MB 1024x1536 ComfyUI_15264_.png)

(2.20 MB 1024x1536 ComfyUI_15447_.png)

(977.98 KB 512x768 ComfyUI_15344_.png)

>>17574 i still think it's pretty fragile, too. mostly because in comparison to ComfyUI which is a more flexible platform that emphasizes extensibility, mine has no extensions feature yet, so every improvement has to go through me still. i've been wondering how to approach that with how i've architected things. and stuff breaks. plus i got kind of tired (physically) and started another genning spree. for basic txt2img work it's already good enough for me never to go back to webui again. it's everything else that still needs work. recently i've been trying to backport some of the stuff i've incubated in ComfyBox into ComfyUI though, like group nodes. plus i wanted a previews feature so i petitioned comfyanon to add my implementation. too much to do...
>>17576 think it's because i trained with the NAI VAE on both models, for me it kept looking washed out otherwise. i'll retrain without VAE to compare
(744.05 KB 640x960 00122-3707304876.png)

>>17586 >retrain without VAE It's pedantic, but if you're using NAI then no vae means the stock SD 1.5 vae. Which is fine, but you could also train with ema-560000 or mse-840000. >>17576 Are you sure it's the style lora causing the issue?
(2.36 MB 1280x1920 catbox_fna6wn.png)

(2.17 MB 1280x1600 catbox_6ariuk.png)

Lora sneak previews
>>17587 >Are you sure it's the style lora causing the issue? It is, because no other style LoRAs I have cause that. I train without specifying the VAE too so it uses the one in the NAI checkpoint (the stock SD one like you mention), and I've never had this issue. What >>17586 mentioned makes sense to me. Semi-related, I ended up going down a rabbit hole of color correction stuff with LAB only to realize the built-in "Color Enhance" utility in GIMP is actually using some fancy LCHab color space that I can't seem to find a simple way to convert to (I'd like to do the conversion outside of GIMP). It's the best one-click color correction I've found that doesn't blow out the rest of the image. Picrel is the first pic from >>17556 with it applied. Figured I'd mention it in case someone else wants to look into it. https://gitlab.gnome.org/GNOME/gegl/-/blob/master/operations/common/color-enhance.c#L93
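For the curious, the linked gegl op boils down to stretching chroma in a cylindrical Lab space while leaving lightness and hue alone. A rough approximation outside GIMP (not the gegl code, and plain skimage CIELAB rather than whatever babl space GIMP uses, so expect small differences):

```python
# Approximate "Color Enhance": stretch chroma (C in LCh) toward full range.
import numpy as np
from skimage import color, io

def color_enhance(rgb):                       # rgb: float array in [0, 1]
    lab = color.rgb2lab(rgb)
    a, b = lab[..., 1], lab[..., 2]
    chroma = np.hypot(a, b)                   # C = sqrt(a^2 + b^2)
    hue = np.arctan2(b, a)                    # hue angle stays fixed
    c_max = chroma.max()
    if c_max > 0:
        chroma *= 100.0 / c_max               # stretch chroma; lightness L untouched
    lab[..., 1] = chroma * np.cos(hue)
    lab[..., 2] = chroma * np.sin(hue)
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)

img = io.imread("gen.png")[..., :3] / 255.0   # placeholder filename
io.imsave("gen_enhanced.png", (color_enhance(img) * 255).astype(np.uint8))
```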
(2.37 MB 1024x1536 ComfyUI_15737_.png)

https://twitter.com/andst7/status/1666488580980498434 good stuff, but hopefully, under the hood, it doesn't apply to loli hentai and dolls... given that we live in idiocracy in 2023, it wouldn't surprise me if it did
>>17589 nvm, I managed to figure out the color enhance thing, turned out I just needed to find the right library. I'll post some sort of extension in a bit.
This is probably a stupid question, but sometimes I see people saying you need cuda toolkit installed for sd to work, but stuff just worked for me out of the box and I never installed it. So what's the deal with cuda toolkit? I'm using rtx3060.
>>17593 What model did you use for the example Reimu?
>>17597 I'm waiting for you to put a wedding dress and ring on her.
>>17584 >i love how you didn't even address the first part of my post because you know i'm right No, I know you're a blind faggot. >additionally, i also don't get pissy and throw a tantrum when someone says i'm wrong, so ditto At least try to hide the fact that you're projecting.
>>17597 Thank you. I'm enjoying the Yoshimon style lately, but only use it at 0.5. Cute gens as usual
>>17598 soon™
>>17595 Same model as the anon in >>17556 since I didn't bother to swap it lel (MeinaHentai v3) https://files.catbox.moe/4oqfd7.png
(3.87 MB 1536x2048 tmp50ws2c4z.png)

(3.51 MB 1536x2048 tmpm3o6w78f.png)

(3.69 MB 1536x2048 tmpcs6a_42b.png)

(3.62 MB 1536x2048 tmp291r61oc.png)

(9.44 KB 480x360 hqdefault.jpg)

>>17599 >projecting
(3.13 MB 1536x2048 tmp_ycxar_o.png)

(3.69 MB 1536x2048 tmp8gol70pn.png)

(3.76 MB 1536x2048 tmp37lj0p8s.png)

(3.42 MB 1536x2048 tmprv191i0b.png)

>>17603 Thanks, gotta try it then, hope it doesn't suck with loras
>generate image with AI >use depth extension to make it 3D >transfer to VR headset (or 3ds if you're THAT desperate) >3D AI porn also works with existing images surprised at how good the 3ds 3d is, it's pretty lowres compared to VR but still acceptable
I fucking hate trannies, especially snarky tech trannies. I've been trying to find ANYONE with the same AM5/ASUS CPU/MB combo I have to ask them if their SoC voltages are still spiking on the latest BIOS and all I get are snarky tech trannies that either try to gaslight you into believing there were never any issues to begin with or just reply with >"lol shouldn't have bought ANUS" and offer ZERO fucking help. I even resorted to fucking r*ddit of all things and got NOTHING. But a post asking "hurrdurr how do you sell used parts?" gets 81 replies in a day. I'm tempted to just build and fucking PRAY that the entire thing explodes and starts a fire that burns down the entire condo. God fucking damn.
woops wrong res
>>17598 Bruh, I hate you now. Yes there will be a smiling and happy tears version.
(3.91 MB 1536x2048 tmplj58cd4k.png)

>>17612 I believe in you maidanon! Make all three of them happy! Maybe even at the same time?
>>17614 Regional prompting time I guess.
(1.87 MB 1024x1536 pain.png)

>>17614 More pain.
(2.45 MB 1280x1920 catbox_jzk3kp.png)

>>17609 >>17611 Nice Rins >>17613 Do you mind catboxing any of these?
(2.25 MB 1024x1536 catbox_hmwc8a.png)

(2.17 MB 1024x1536 catbox_ys1t2s.png)

(2.12 MB 1024x1536 catbox_cd9r4z.png)

(2.09 MB 1024x1536 catbox_drys5u.png)

>>17617 https://files.catbox.moe/ia099k.png https://files.catbox.moe/oi9eix.png the dishwasher lora seems pretty hard-to-find so i'll reup (it's different from the one on civitai) https://files.catbox.moe/mc3g7h.safetensors
(2.58 MB 1280x1920 catbox_190h3q.png)

(2.28 MB 1280x1600 catbox_084qo9.png)

(1.96 MB 1280x1600 catbox_pgsxww.png)

(811.23 KB 640x960 catbox_bsglnp.png)

(2.50 MB 1280x1920 catbox_sm4nae.png)

>>17618 >>17619 I'm getting kinda fried results when using the SDE Karras sampler and the 84k vae. Seems like the yume nikki lora was trained with the nai vae? Pretty hard to read the catboxes because of the comfyui stuff
(1.32 MB 1024x1280 00704-2602895025.png)

>>17623 Forgot to attach the pic.
New controlnet inpaint preprocessor that uses lama on the masked area first before generating. https://github.com/Mikubill/sd-webui-controlnet/discussions/1597 I guess you could use this as a built in lama cleaner if you only use the preprocessor? >>17623 I dunno, I always found all the non nai vaes pretty fried looking compared to nai.
(1.17 MB 1024x1280 00709-1867165892.png)

>>17625 I think training with a vae selected is not a great practice? 84k seems to work fine with the vast majority of loras/models. Seems to be less fried with reduced unet/cfg and the euler a sampler, but the eyes are still a bit weird. I don't like using the nai vae because of its tensor-with-NaNs shit. Thanks for posting the lora anyway though.
>>17626 >I think training with a vae selected is not a great practice? I think so, but I remember not liking other vaes even before loras were a thing. The lines always looked slightly blurrier to me.
>>17626 >>17627 Oh btw, I'm not the anon who posted that lora.
Any well-made artist loras with great legwear texture/detail?
>>17627 I don't want to mess with its bugginess even if it looks slightly better (I don't think it does tbh) >>17628 Oh, alright.
>>17624 >>17626 Yeah, that anon is training with the NAI VAE. I ran into the same issue in >>17576. Once I figured out how to implement GIMP's color enhance in >>17593 though, I don't really care which VAE I use. In fact the NAI VAE is probably the better one to use, because it's finetuned on whatever NAI did specifically for all of the Danbooru art, just as SD's VAEs are specifically targeted towards (mostly) fixing faces. If you compare the NAI VAE and SD VAE while using my color enhance script at full strength, that should give you an idea of what the VAEs are _actually_ doing.
>>17631 Oh I see, I didn't pay much attention when reading those posts before because I wasn't that interested in the character. >In fact the NAI VAE is probably the better one to use, because it's finetuned on whatever NAI did specifically for all of the Danbooru art, just as SD's VAEs are specifically targeted towards (mostly) fixing faces. If you compare the NAI VAE and SD VAE while using my color enhance script at full strength, that should give you an idea of what the VAEs are _actually_ doing. I don't like its washed out colors and the worst part is the failed gens because of the tensors-with-NaN bug. I like how 84k looks most of the time; the problem only ever occurred with this lora and that civit Takeuchi lora.
>>17631 >>17632 Isn't "blessedvae" the NAI one but without the bug?
>>17632 NaN bug is a fair point. The problem is "rare" because almost everyone has been conditioned to not specify the VAE in training, so they all use the early v1 or whatever variation of the SD VAE, which is almost never replaced in distributed checkpoints/mixes and just lives there. I don't fully understand why colors are so problematic with VAEs, but what I do know is that the concern for VAEs should be more about the accuracy of everything besides color, since its whole purpose is translating the tiny latent space into a bitmap image. The training of a VAE basically just involves converting to latent space and training how to convert that back to the original image. How color somehow got fucked up in that process for some VAEs I don't quite understand. This isn't entirely related, but if you look at how something like huggingface diffusers works, models are supposed to be distributed in separate parts (CLIP, UNet, VAE, etc), but voldy just so happened to create the webui before diffusers supported SD, and since that's what essentially all casual users use, we're left in this limbo where models always include a VAE when really only a few exist currently. diffusers even went out of their way to support LoRAs trained with kohya's trainer because almost nobody in the community uses diffusers' own LoRA trainer, lmao. https://github.com/huggingface/diffusers/pull/3437 >>17633 "Blessed VAE" is just some color correction applied to the out/last layer of the VAE. It's literally the same as if you were to do the color correction yourself on the final image. It's actually kind of a bad idea to use because you have to correct it in both directions if you're using a two-pass workflow like hires fix in the webui, so you end up getting a compressed color range and possibly banding and other shit. https://github.com/sALTaccount/VAE-BlessUp#how-it-works
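To make the "whole purpose" point concrete: in the diffusers framing the VAE is just the encode/decode pair around the latent the UNet works on, nothing more. A minimal sketch assuming the diffusers AutoencoderKL API (the path is a placeholder):

```python
# The VAE only maps between pixel space and SD's 4-channel latent space.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("path/to/vae").eval()

@torch.no_grad()
def roundtrip(image: torch.Tensor) -> torch.Tensor:
    # image: (1, 3, H, W) scaled to [-1, 1]
    latents = vae.encode(image).latent_dist.sample() * 0.18215   # SD's scaling factor
    return vae.decode(latents / 0.18215).sample                  # back to pixels
```

Any color shift or blur people attribute to a VAE is error in that decode step, which is why swapping decoders (ft-mse, ft-ema, NAI's) changes the look without touching the UNet.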
>>17634 So there's no patched/unfucked NAI VAE?
(5.82 MB 2048x3072 tmp2akmdguk.png)

>>17616 No pain just happy tears
>>17631 >>17623 >>17626 dunno if it's a good practice or not, i just trained without a VAE one time and the results looked washed out with any VAE compared to training with NAI's. and i also don't like using VAEs besides NAI's in general because they make everything too vivid. i need to redo this test with the same dataset and tags to confirm, but the center two columns were trained with the NAI VAE and the one on the right was trained without a VAE. even if it's better with manual color correction i don't see why that has to be a necessary step; my impression is that if the result image is washed out it's specifically a VAE issue. ...or I could just be a retard who needs educating in some other way besides changing just the VAE setting
>>17637 Unless someone better versed in ML can give an explicit reason why you should/shouldn't specify a better VAE for training, I don't think what you're doing is wrong. And I'm still in agreement with you that the NAI VAE is the best choice for accuracy. But wouldn't you also say the first image in row 2 should be more saturated, like the first image in row 4? Even if you don't agree, that's the annoyance many people seem to have (including me), which is why I even bothered making that color enhance script a few days ago. At least now I can get the best of both worlds and choose a VAE based on everything else but color saturation (i.e., Stability finetuned and released theirs to fix human faces, https://huggingface.co/stabilityai/sd-vae-ft-mse-original#decoder-finetuning)
>>17636 haha yes, happy tears..
(1.70 MB 1024x1536 tmpk1jgy1bm.png)

>>17639 of course! What is a better reason for happy tears than to marry the person you love?
I was annoyed that I didn't really know the difference here, so I looked into it a bit more; maybe it will help someone. Apparently sd-scripts will use the "Dreambooth method" instead of normal finetuning when using caption .txt files, or in other words when not inputting a prepared .json. https://github.com/kohya-ss/sd-scripts/blob/6417f5d7c183eccd79422f28804f7d7c507e05b9/train_network.py#L85 However, when I looked at the code, everything Dreambooth related seemed to only concern datasets. These were the only pertinent things I could find for it, but they only concern regularization images, something that only applies to Dreambooth and not finetuning. https://github.com/kohya-ss/sd-scripts/blob/6417f5d7c183eccd79422f28804f7d7c507e05b9/library/train_util.py#L920 https://github.com/kohya-ss/sd-scripts/blob/6417f5d7c183eccd79422f28804f7d7c507e05b9/library/train_util.py#L2279-L2283 After looking at the Dreambooth paper, I've come to realize this is all the "Dreambooth method" actually is. It's accounting for prior preservation loss via regularization images, and making use of rare tokens in captions (see sections 3.2 and 3.3, and the sketch below). https://arxiv.org/pdf/2208.12242.pdf So, back when a lot of us were attempting to train models before LoRA, my read of what was happening is that because so many people simply said "don't worry about regularization images", we were ultimately just doing normal finetuning with a rare token, and depending on how the trainer was set up, results might have suffered if it didn't properly account for no regularization images being input. I'm guessing this is partially why kohya's trainer became so widely used: it (to my knowledge after looking at the code) accounts for the case where no regularization images are input and falls back to normal finetuning. So for the past months essentially nobody has been using Dreambooth at all, which is not a bad thing, but it means there really isn't an excuse for "nobody is finetuning now" when we actually have been this entire time. Thank you for coming to my blog. :^)
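The sketch mentioned above: the prior-preservation objective from the Dreambooth paper (section 3.2, notation simplified here) is just the ordinary denoising loss plus a second, λ-weighted copy of it computed on the class/regularization images:

$$\mathcal{L} \;=\; \mathbb{E}\big[\, w_t \,\lVert \hat{x}_\theta(\alpha_t x + \sigma_t \epsilon,\, c) - x \rVert_2^2 \,\big] \;+\; \lambda\, \mathbb{E}\big[\, w_{t'} \,\lVert \hat{x}_\theta(\alpha_{t'} x_{\mathrm{pr}} + \sigma_{t'} \epsilon',\, c_{\mathrm{pr}}) - x_{\mathrm{pr}} \rVert_2^2 \,\big]$$

Drop the second term (i.e. provide no regularization images) and you're back to plain finetuning, which is exactly the fallback described above.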
>>17641 sorry for suggesting pregnant bellies and weddings maidanon
>>17642 One minor footnote to this: apparently it wasn't always the case that it fell back to finetuning, and you had to input a .json, according to kohya: https://github.com/kohya-ss/sd-scripts/discussions/23#discussioncomment-4553235 As far as I can tell the refactor mentioned there was done, so normal finetuning is used, but I think I'm going to ask in that thread just for clarification.
>>17643 No worries, it will be used as motivation.
>>17641 >Make anime real Even if there's a lot of things wrong with it, when I first saw Apple Vision revealed my autism immediately went to this thought, we're one step closer
(1.71 MB 1024x1536 tmpzoege9f4.png)

>>17641 Just wait a bit more. With the advancement in AI I believe that jumps in technology could become faster than before. Ergo some VR project will be able to pull off full dive in the near/far future
>>17608 I don't have any of these, but how does that work? Is it a fake 3D like those layered cards that change what's in them when viewed at different angles? Do you load these in some 3D software and view them there or can you actually load an image + depth map into the headset itself?
>>17648 Using some weird math, you can get a stereo image from an image + depthmap (which you can get using the extension too, numerous models available). Then you just open the side-by-side 3D image in whatever way you can depending on your headset. Don't do images that are too complicated since that process kinda fucks with edges. Also avoid images that are too wide/tall cause you get horrible results. The limiting factor most of the time is the depth model, though that's mostly because I don't try it on photorealistic stuff so it's harder for it to make accurate depthmaps.
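The "weird math" is mostly a horizontal per-pixel shift proportional to depth; real implementations fill the resulting holes and treat edges more carefully, which is why complicated images look bad. A bare-bones sketch (filenames are placeholders, max_shift controls how strong the 3D effect is):

```python
# Fake a second viewpoint by shifting pixels by their depth, then put the two
# views side by side for an SBS 3D viewer.
import numpy as np
from PIL import Image

def stereo_pair(rgb: np.ndarray, depth: np.ndarray, max_shift: int = 12) -> np.ndarray:
    # rgb: (H, W, 3) uint8; depth: (H, W) float in [0, 1], 1 = near
    h, w, _ = rgb.shape
    left = np.zeros_like(rgb)
    right = np.zeros_like(rgb)
    xs = np.arange(w)
    for y in range(h):
        shift = (depth[y] * max_shift).astype(int)
        left[y, np.clip(xs - shift, 0, w - 1)] = rgb[y]    # holes/overlaps left unfilled here
        right[y, np.clip(xs + shift, 0, w - 1)] = rgb[y]
    return np.concatenate([left, right], axis=1)

rgb = np.array(Image.open("gen.png").convert("RGB"))
depth = np.array(Image.open("gen_depth.png").convert("L")) / 255.0
Image.fromarray(stereo_pair(rgb, depth)).save("gen_sbs.png")
```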
>>17650 Yep, that's the one
>>17650 >>17651 Couldn't you do that with controlnet? They've added a new preprocessor for depth like a month ago that seems to be able to do details better. https://github.com/Mikubill/sd-webui-controlnet/discussions/1141 Or is this one better?
>>17652 yeah controlnet has a lot of the same preprocessors. MiDaS, ZoeDepth, and LeRes (which the depthmap script extension does not support from what I can see looking at the repo). there's a similar case where there's an extension utilizing a model trained for background removal in anime illustrations specifically (https://github.com/KutsuyaYuki/ABG_extension), but now that model and several others can all be utilized in another extension that actually is a bit more performant anyway (https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg)
(3.02 MB 1280x1920 00271-1136787854_depth.png)

(3.44 MB 2560x1920 catbox_byy5jj.png)

>>17653 Yep, they use the same zoedepth, but they don't share the same folders, so you waste a lot of space using both
>>17652 Where would controlnet even be used? We're talking about generating a 3d image from a 2d image + depthmap, which is pretty easy to compute. Also, the controlnet preprocessor for depth lagged behind what the depthmap generation part of this extension could do for a long time, though the gap is indeed narrower now.
>>17655 I only meant the preprocessor, but I see.
>>17602 damn son, catbox?
>>17610 try asking perplexity. it's better than search engines a lot of times
(1.26 MB 1024x1152 54554-AI.png)

(1.19 MB 1024x1152 54570-AI.png)

Had a client ask for loli taihou, may as well post it here https://mega.nz/file/fMQzjbBa#OVKNMZM7fI5GZwDASMVghgxMet0e53_vEuOGnJyjlfI
>>17659 Did they ask for the lora or did you just gen a few pictures for them? I always wonder who would pay for these but there is seemingly a market for this?
>>17660 I don't like basically scamming people into paying for images. Just added a kofi to my civit page and started taking specific character requests with a 5 buck dono
>>17661 sounds fair. Still better than 'sub to my patreon for more' on 90% of shit lora/model pages
>>17662 I'm sure that's a far more profitable method, as what I do really only caters to a few repeat customers, but it feels better knowing I'm giving people the tools to help themselves to this bounty. Here's all I've done after about 3 months
>>17658 >perplexity it found a thread on jewnustroontips that's half helpful, same cpu but not the same mb, it's a cheaper one. after some wrangling it sent me to a post on the official amd board that's extremely worrying lol
>>17664 >same cpu >same board >latest bios >no expo lol i want to die
I have been away for about two months. Are you now using the built-in lora support rather than the extension? Which is better?
>>17666 I don't think one is better than the other, just use whichever you like better. I use the built-in one because of the convenience of typing the lora name in the prompt with the help of the tag autocomplete extension, or using the gallery, instead of trying to find it in a long list of loras in the dropdown. But with the extension you can separate unet and tenc weights, which can be handy sometimes, plus you can use it to check lora metadata.
(1.96 MB 1024x1536 00260-2742507427.png)

(2.16 MB 1024x1536 00245-4290182689.png)

(1.97 MB 1024x1536 00015-1585226733.png)

(2.02 MB 1024x1536 00016-1166940747.png)

(1.96 MB 1024x1536 00098-3233572772.png)

(2.30 MB 1024x1536 00064-1223189826.png)

(1.96 MB 1024x1536 00254-919744664.png)

(2.14 MB 1024x1536 00020-922686308.png)

(1.47 MB 1024x1536 catbox_61crpl.png)

(1.79 MB 1024x1536 catbox_q0jbui.png)

(1.76 MB 1024x1792 catbox_96dl7u.png)

(1.64 MB 1024x1536 catbox_s4ma19.png)

finally had enough time to spend not developing the scripts to make a new lora, this one is of kobato from haganai. the training settings for it were absolutely fucked up, but it's somehow very consistent in everything except for the dress. the dress is problematic in two ways: 1. the way I trained, and 2. the dataset itself is inconsistent, and I mean wildly inconsistent. beyond that, it works pretty well, way better than it should all things considered, and it works pretty consistently on a large variety of models. the readme on civitai is not complete because of the 3 different times I mention loli in it, but the model itself has metadata and an embedded image that you can look at through additional networks. the mega drive and pixai.art have the complete readme though. links: https://civitai.com/models/87226/hasegawa-kobato https://pixai.art/model/1624124841812486099 https://mega.nz/folder/CEwWDADZ#qzQPU8zj7Bf_j3sp_UeiqQ/folder/6AgSTDKK
VAE training test fullsize: https://files.catbox.moe/1giz77.png i think it's less washed out with novae trained + animefinal VAE than i originally thought, but personally i still like the colors with NAI's VAE trained in slightly better (not by much in this case though), and the non-NAI VAEs still kinda hurt my eyes, at least with my monitor's calibration. i think i'll at most train with NAI and also add a novae for a comparison in case it turns out too vivid
>>17675 Could you post the lora with the same training params as >>17619 but without VAE please? I wonder how it's gonna look with 84k
>>17676 sure, i'll toss in the others too yume_nikki1_novae.safetensors https://files.catbox.moe/b3nimv.safetensors yume_nikki1_mse.safetensors https://files.catbox.moe/ucr0rw.safetensors yume_nikki1_ema.safetensors https://files.catbox.moe/7gfrll.safetensors
>>17677 thank you, I'll test it later but I'm pretty sure novae is gonna work well on 84k. NAI vae would be fine for me if it wasn't buggy, and I don't want to use no-half-vae
(5.57 MB 2048x2048 51.png)

>>17679 >"anon" watermarks his shit >posts it on an image board here's your one and only (you), now crawl back to 4feds /hdg/
>>17678 personally i haven't had any NaN issues/black images since i switched to ComfyUI and upgraded xformers to 0.20.0, regardless of VAE settings. at the same time i can't recreate some of my old webui gens to be close enough to the originals with ComfyUI, so there's tradeoffs i guess
>>17680 yeah, it's pretty funny, they aren't getting any praise here. it'd be great if they'd just go away as a whole though.
anyone know where to find these hentai bestiality loras or similar ones? fuckers uploaded the model but made it unavailable for download https://pixai.art/model/1613565462168631521 https://pixai.art/model/1613545027671903249
>>17683 best i can do is a horse sex lora i saved from maybe /trash/ ages ago https://pixeldrain.com/u/GLQZRVL2
>>17672 got a catbox? like the panties and transparent dress
>>17675 >trained with no vae still the best cool. glad we've reestablished this
>>17683 Second one is shamelessly stolen from here. https://www.pixiv.net/en/artworks/105399323
>>17680 i'm honestly wondering why he's posting here. my guess is he either got laughed out of 4feds, got banned and is too retarded to get around it, or is a mad schizo
(501.17 KB 512x768 catbox_se9sxs.png)

(1.40 MB 960x1440 catbox_3boxqi.png)

>>17685 Here's the catbox that's probably most useful for you, as it contains the original seed before upscaling and stuff. Also added one without loras that should be easier to replicate. Panties are a lucky gen, I suppose. Maybe adding "highleg panties" to the prompt will increase the chances.
>>17672 >Looks cute, appreciate your work ty. straw hats are based
>>17689 That first and third one are really cute. May I have a catbox? >>17693 was inspired by maidanons straw hats. BTW, did you ever / do you still take suggestions for artists? I'm slowly accumulating a 40hara dataset
>>17693 >straw hats are based strawchads keep winning
>>17694 >BTW, did you ever / do you still take suggestions for artists? always looking for more artists to make loras of. it's getting harder and harder to find more interesting ones though
>>17696 notboogie :)
>>17696 Well, 40hara / Shimahara surely has my vote. Let me know if a folder of good quality images would be of any help. Or if there's any other way to procure and prune a dataset in advance. Heck, I should probably upgrade my gpu and learn how to train loras already
>>17681 >i can't recreate some of my old webui gens to be close enough to the originals with ComfyUI Using these sets of custom nodes makes webui results reproducible for me. Obviously for seed-breaking stuff in the webui you'd have to intentionally break Comfy. Feel like you may already know of these but posting them anyway https://github.com/BlenderNeko/ComfyUI_Noise https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb
>>17699 Still appreciate it!
>>17687 >>17684 nice, much appreciated
(1.78 MB 1280x1024 00803-3519922984.png)

(1.94 MB 1280x1024 00802-3519922983.png)

(2.26 MB 1280x1024 00804-3519922985.png)

(1.24 MB 1024x1280 00799-3312449410.png)

>>17681 >>17677 >>17675 Thanks for bothering with testing and posting the novae version, really cool. Mostly wanted to prompt trippy shit with her though. >personally i haven't had any NaN issues/black images since i switched to ComfyUI and upgraded xformers to 0.20.0, regardless of VAE settings Dunno, maybe comfyui has no-half-vae by default? This setting made vram usage higher for me last time I tried it. Not feeling like switching to comfy while I can still get by with auto.
>>17699 Can you, please, share this kantoku lora? I just can't find that particular version of it on the net.
>>17565 I couldn't install ComfyBox for the life of me
>>17705 There's a nightly link you download and run: https://nightly.link/space-nuko/ComfyBox/workflows/build-and-publish/master/ComfyBox-dist If you want to build it yourself it can't be done under Windows since there's submodules with build scripts that are shell scripts only
>>17704 updated https://mega.nz/folder/XywGnaBI#z51WHmX_S3OxbXzRBJfM7A i feel like it has a higher chance of creating a double rib cage and double hip bones than no LoRA due to using cropped images, but maybe not. works well enough for me.
(1.01 MB 1024x1024 00026-2711460395.png)

(1.37 MB 1024x1024 00020-3179535208.png)

>>17691 based railgun coomer
(1.83 MB 1024x1536 00553-4154094178.png)

(1.46 MB 1024x1536 00580-1799858461.png)

>>17707 Thanks! And what is the best model for this lora?
>>17707 Thanks! And what is the best base model for that lora?
>>17710 dunno. seems to work fine with aom mixes and based64
>>17706 i'm a windows dev, it's just that the build scripts require git bash to run. should probably clarify in the readme.
>>17696 not sure if any of those would be appealing to you but still gonna dump a bit of artists jack dempa sky-freedom noripachi 40hara ulrich_(tagaragakuin) miyo_(ranthath) oosawara sadao hella p sanuki_(kyoudashya) Probably too generic-looking for you but might as well suggest them if I'm not baking them soon
can anyone reupload the Pumpkinspice lora, pls
>>17629 bump
(1.73 MB 1280x1600 00091-1235568989.png)

Working on my own merge, thoughts anons? I would rate myself a 6.5/10 for knowledge; I haven't even trained a Lora ngl, so there could be lots of room for improvement. This image is completely gacha, no loras or upscaling. Can provide more if there is interest.
>>17720 I rate it a UOOOOOOOOOOOOOOOOH BREEEEEEEEEEEEEEEEEEEED out of 10.
>>17720 Are you the one who was making that soft 3d merge?
(909.46 KB 3000x4000 catbox_9tzocq.jpg)

>>17718 Very nice thanks anon
(2.36 MB 1552x2048 tmpcda1h3pf.png)

(2.31 MB 1536x1920 231.png)

(3.01 MB 1576x2048 13798.png)

(1.87 MB 1408x1792 tmpp08ymux9.png)

Ooh, I haven't been on 8Chan but someone mentioned my SOFT3D model? I forgot to upload it here but it's done. https://huggingface.co/Hosioka/Baka-Diffusion
(2.71 MB 2250x1500 LoRALycoris Preview[S3D].png)

>>17725 So how'd you fix the noses in the end?
(5.50 MB 2048x2048 14.png)

(214.45 KB 1074x1869 image (1).png)

>>17727 Weakening the concept "Realistic" from both the cross attention and self attention layers, but only on the OUTPUT layers. It's not totally gone, but it doesn't appear in regular prompts anymore. Yes, I've tried MBW to fix the BASE model and it killed the composition and coherency of the model, which I specifically built the way it is. MBW failed. A private fork I have of Erasing is what did the job. Finally it kept the coherency of the model I was aiming for as well as the composition. It's quite hard to put into words, but the composition and hand drawing in this model is better than most anime models, if not all of them. Original Erasing repo https://github.com/rohitgandikota/erasing
>>17729 Funny thing about Erasing: after weakening the concept "Realistic" it became better at drawing stuff like eyes, which I think was conflicting with the realism before weakening the concept. Here are 2 unreleased versions of the [S3D] model where I experimented with Erasing from only the cross attention layers. Now that realistic is weaker it somehow results in the model leaning towards anime more and drawing fine details better while keeping the Soft3D. Yes, noses are still present. (These will not be released, it's just an experiment.)
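For context, the objective behind that repo ("Erasing Concepts from Diffusion Models") fine-tunes an edited copy of the model toward a target pushed away from the concept prompt c, with the original frozen weights as the reference (notation simplified; η is a guidance-like scale):

$$\epsilon_{\theta^*}(x_t, c, t)\;\leftarrow\;\epsilon_{\theta}(x_t, t)\;-\;\eta\,\big[\epsilon_{\theta}(x_t, c, t) - \epsilon_{\theta}(x_t, t)\big]$$

Which layers receive the update (cross-attention only vs. also self-attention/output blocks) is exactly the knob being varied in the two posts above.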
>>17674 Oh~~~ Catbox please.
>>17730 gonna be honest with you, i like the one on the left way more, nose and all
>>17732 ye left is pretty good too!
>>17733 let me know if you ever plan on releasing the older pre-erasing version, i quite like it
>>17734 eh I'll think about it man
>>17714 some of these are cool. i'll probably bake at least one of them after i finish the one i'm working on now
>>17725 >negative : nsfw Seems gay.
>>17737 Don't worry about it anon. I actually generate more cunny than you do in my free time. - baka man
>>17738 bold claim
>>17738 >I actually generate more cunny than you do in my free time. post it here or go back
>>17738 >I actually generate more cunny than you do in my free time. this is why all the images he posted aren't even remotely nsfw
(1.10 MB 864x1304 00147-1895978864.png)

(1.36 MB 864x1304 00173-3091377416.png)

(1.25 MB 864x1304 00084-157331404.png)

(1.47 MB 864x1304 00201-2628911394.png)

made a bunch of oppai loli stuff today
>>17741 frfr HAHAHHAHA
>>17725 >>17726 how does it look with 3d in negs
(5.72 MB 2048x2048 16.png)

>>17745 Fuck off already you retarded cringelord
>>17744 It's a war crime so I don't do it
great more schizo model makers, keep your shit to be shilled on civitai
>>17748 LOL :]
(3.20 MB 1536x1920 14302-1555365769.png)

>>17718 good
nvidia bros, stay on 531.79 or rollback to it if you have updated https://github.com/vladmandic/automatic/discussions/1285
Does any Lora exist for the artist abutomato? If not, what would be the minimum specs a pc would need and how long would it take to make one from scratch?
Haven't had the time to keep up with things so it's old tech. Hifumi lora https://files.catbox.moe/ft41w0.zip https://anonfiles.com/z6h638wfz8/AjitaniHifumi_t87_zip
>>17752 Not that I know of but I could do one tomorrow assuming I am able to get the dataset downloaded
Why do people add activation tags to style loras?
>>17752 something quick and dirty before >>17754 can show off his try https://anonfiles.com/mc1391w2z9/abutomato_safetensors
(2.43 MB 1280x1600 catbox_axw8lv.png)

>>17756 Good timing, wanted to try a few more variations before uploading
>>17755 In some cases I presume it's a holdover from the EARLY EARLY days when additional networks were harder to add/remove on the fly. In most cases I presume it's just idiocy.
>>17759 >In most cases I presume it's just idiocy. That's what I usually think as well, but it's an otherwise reasonably well made locon from someone who I think posted on this board.
(1.80 MB 1280x1600 catbox_aehzb3.png)

(2.81 MB 1280x1920 catbox_7valdq.png)

(2.23 MB 1280x1600 catbox_9hr6uf.png)

Friendship ended with cosine now linear is my best friend
Any tips about controlnet tile and tiled diffusion? I'm still a bit confused about what exactly each is supposed to do and when to use them, plus all the other options related to them. Tried them a bit and this is what I understand about them:
Controlnet tile: adds detail like latent upscaling, but is actually able to preserve the composition, which is also good for tiling scripts. Doesn't actually tile anything.
Tiled diffusion: for upscaling by splitting the image into smaller tiles, so I don't run out of vram.
Tiled VAE: just to prevent oom when encoding/decoding at the start and end?
Noise inversion: no idea what this is. It seems to smooth the result a lot in a very ai way, kinda makes the result look like default nai.
There's a lot of sliders I just don't understand. What's the difference between multidiffusion and mixture of diffusers? They seem to look about the same. Supposedly you can combine CN tile and noise inversion to keep the composition from CN while NI cancels the latent upscale look or something, but I haven't got good results with them and it takes too much time to bake.
>>17762 From what I know, tiled VAE also helps prevent OOM by decoding the image in smaller parts, like how tiled diffusion works for sampling. Unfortunately the webui extension was incompatible with dynamic prompts and the authors stated it would be impossible to add support with webui's architecture, so I gave up using it
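The core idea behind tiled VAE decode, very roughly: split the latent into tiles, decode each, and stitch. The real extension blends seams and keeps normalization statistics consistent across tiles, which this sketch ignores entirely:

```python
# Naive tiled decode: bounded VRAM, visible seams (illustration only).
import torch

@torch.no_grad()
def tiled_decode(vae, latents: torch.Tensor, tile: int = 64, overlap: int = 8) -> torch.Tensor:
    # latents: (1, 4, h, w); SD decodes to an image 8x larger per side
    _, _, h, w = latents.shape
    out = torch.zeros(1, 3, h * 8, w * 8)
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            part = latents[:, :, y:y + tile, x:x + tile]
            img = vae.decode(part / 0.18215).sample          # decode one tile at a time
            out[:, :, y * 8:y * 8 + img.shape[2], x * 8:x * 8 + img.shape[3]] = img
    return out
```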
(6.99 MB 2048x2560 14322-1855691925.png)

(3.15 MB 1536x1920 14375-2036043938.png)

(1.47 MB 1024x1280 14386-2284547508.png)

>>17762 i gave up and just generate 2 images one with 0.05 denoising strength and one with >0.3 and just overlay them and erase the area where the anime is supposed to be for detailed backgrounds. CN tile has a color offset problem and can cause oversaturated spots all over and noise inversion requires like 200 steps to be useful so i decided not to use either. CN tile + NI allows NI with less steps but the color problem still persists.
(2.01 MB 1280x1600 catbox_nycxe1.png)

(2.12 MB 1280x1600 catbox_xzyq9u.png)

Linear vs cosine with restarts this time, wins once again
(2.12 MB 1280x1600 catbox_2bi8l2.png)

(2.21 MB 1280x1600 catbox_6p0o74.png)

(2.17 MB 1280x1600 catbox_lm95p0.png)

Alright done fucking around here's the Abutomato lora https://mega.nz/folder/2FlygZLC#ZsBLBv4Py3zLWHOejvF2EA/folder/SE9g3K6A I suggest pairing it with the Madoka anime lora at low weight if you want to prompt characters other than Mami and Nagisa
(2.12 MB 1280x1920 catbox_fvpnlp.png)

(2.36 MB 1280x1920 catbox_enlqil.png)

(2.41 MB 1280x1920 catbox_632sy7.png)

(2.29 MB 1280x1920 catbox_8lk6sg.png)

>>17764 just take a break and let your brain and dick heal
>>17766 Are you doing that manually in an image editor? Now I wonder if CN tile also works with inpainting, where you could gen the whole pic to high res with lower denoise, then enhance the background using CN tile at that res. >CN tile has a color offset problem and can cause oversaturated spots all over Haven't seen it do that except make the result desaturated. Maybe that's only in this case though, haven't really tried doing more. There's also the tile_colorfix preprocessor that seems to retain the saturation, but the image got slightly less detailed. Have you tried that? Do you still use tiled diffusion to make high res images or does your gpu just allow you to gen these in one go?
>fags and gay furries shitting up my CivitAI feed with their overbaked gay models even when I blocked the associated words gdi, gatekeeping is a good thing after all; tag your shit correctly
>>17774 >CivitAI feed reap what you sow
(1.71 MB 1024x1536 00316-1811641760.png)

(1.64 MB 1024x1536 00146-1383836701.png)

(1.69 MB 1024x1536 00062-154587651.png)

(1.79 MB 1024x1536 00307-4250918775.png)

(1.54 MB 1024x1536 00088-121953258.png)

(1.77 MB 1024x1536 00009-3502518066.png)

(1.74 MB 1024x1536 00205-3042200965.png)

(2.03 MB 1024x1536 00089-2781144226.png)

(2.84 MB 1280x1920 00099-2138142020.png)

(2.76 MB 1280x1920 00095-2138142017.png)

(2.87 MB 1280x1920 00092-2138142014.png)

(3.10 MB 1280x1920 00090-2138142012.png)

arms behind head is cathartic every once in awhile
>>17772 dick is fine, brain probably not. >>17776 neat. >>17778 It is and I'm not even a pitsfag, liking outstretched arms/hands right now though.
>>17773 i use tiled diffusion and tiled VAE as i'm only running 8gb, but without any of the additional things. photoshop to stitch two images together. tiled diffusion is way more coherent compared to SD upscale or ultimate SD upscale scripts. >tile_colorfix neat, i'm gonna give it a try. hadn't updated in a while.
>>17778 what style is that? looks vaguely familiar
>>17774 >"why are there fags and furries in my reddit for AI feed?"
>>17783 >Wow! Water is wet! you jumped into a pool and got wet! yeah no shit sherlock
(19.72 KB 322x290 e40.png)

>>17784 someone's upset about getting outed as a civitai fag huh
>>17785 Yeah I got outed as a CivitAI fag on an ANONYMOUS image board, my life is basically over. yeah im upset, whatever helps you sleep at night buddy.
>>17775 god forbid anybody uses the only service of its kind
>>17787 >yeah im upset according to >>17788 and >>17774 yeah, you're definitely upset
(5.43 MB 2560x2048 14439-1131665337.png)

>>17773 tried playing around with it some more and i still think using minimal denoising strength just to smooth out some of the artifacts from the gan upscale gives me the best results. not really the best way if you actually want to add something extra.
>>17767 I find that linear is just worse usually, might be how we train things.
hey all, I didn't really mention some of the smaller updates, but I just pushed a new update to the lora easy training scripts, so I might as well mention it. this update was smaller, mainly bug fixes, but still. You can update using the update.bat, or follow the link below to get install instructions https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
what is the difference between the 8bit/non-8bit version of AdamW?
>>17793 vram usage mainly. it's also faster as far as I remember
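In short: the weights and gradients are the same, but the 8-bit version stores AdamW's two moment buffers block-wise quantized to 8 bits, which is where the VRAM saving (and a bit of speed) comes from. Minimal sketch assuming the bitsandbytes API; in kohya's scripts it's just --optimizer_type AdamW8bit vs AdamW:

```python
# Same update rule, different storage for the optimizer state.
import torch
import bitsandbytes as bnb

layer = torch.nn.Linear(4096, 4096).cuda()
opt_fp32 = torch.optim.AdamW(layer.parameters(), lr=1e-4)      # fp32 exp_avg / exp_avg_sq
opt_8bit = bnb.optim.AdamW8bit(layer.parameters(), lr=1e-4)    # 8-bit quantized moment buffers
```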
>>17746 kill your family and then kill yourself
(8.22 MB 2048x2915 28.png)

(7.96 MB 2048x2915 30.png)

>>17746 suck the shit from my anus after you tongue it
>>17756 >>17769 Thank you, going to see what I can make with them.
Are there any specfic Loras/checkpoints for generating loli?
>>17798 It's a spic.
>>17801 After seeing "suck the shit from my anus after you tongue it" I'd have placed a bet on brazilian
>>17802 Well I actually checked his trash pixiv kek
>>17521 this is ai art you stupid bitch
>>17804 >artlet, resorts to ai >ailet, can't even fix glaring issues for his grift no wonder no one is paying you
>>17696 another suggestion pochi_(pochi-goya) surprised no one did lora for him, the hotaru doujins are amazing
>>17804 no shit sherlock just because it is made with ai doesn't give you a pass to post aggressive replies I mean did you really think anyone here would suck you off because you posted a pic with a watermark? Where do you think you are?
>>17804 AI maybe art surely not this is actual fried garbage, you'd get better results getting prompts from /d/ lmao
Is Anylora better for training than NAI now?
>>17809 no it's a meme
>>17809 don't fall for that meme, maybe it's a good checkpoint for testing out LORAs but otherwise it's just another Civitai user making retarded claims
damn though you guys would have made a Jack the Ripper lora, but ctrl+f-ing though the threads there doesn't seem to be one and the one on civit is old with only one tag + 100 repeats
seems Civitai's moderation is now using AI interrogation to determine if images are of "minors", and automatically removing failures with no option for appeal kek
>>17814 That's funny.
>>17814 does this only work for NSFW? Because if not kek, it seems like so much of the public wants to ignore the fact that it's usually kikes who want to touch real kids
(1.24 MB 1024x1536 00140-1394962231.png)

(1.28 MB 1024x1536 00139-1394962230.png)

(1.25 MB 1024x1536 00137-3614897982.png)

>>17816 If it wasn't only nsfw they'd probably delete all previous loli loras? Which is not the case.
>>17813 no outfits or activation tokens but it should work fine https://anonfiles.com/c5KfU1wcz8/jack_the_ripper_fate_safetensors
(3.15 MB 2048x2560 14490-2006184039.jpg)

(3.15 MB 1536x1920 14554-2677014783.png)

(2.13 MB 2048x2560 14599-1040995009.jpg)

What's a good average time to generate images on webui? It takes me about 40 secs to generate 4 images at 512x768 with an rtx 2070.
>>17790 So you basically gen a pic at low res, then upscale in img2img twice at different denoise levels, combine them in ps and then do tiled upscale with low denoise? Or do you gen with hires fix and then do tiled upscale twice, which you then combine in ps? Also did you do any other edits to this pic like inpainting or use other tools like openpose etc.? It looks really nice. Mind posting a catbox? >>17819 These too, I really like the first one.
>>17822 Damn, I never managed to make tits THAT flat.
>>17823 Well she wasn't meant to be completely flat, the prompt has [flat chest|small breasts] but I'm not gonna complain about it.
>>17824 >Well she wasn't meant to be completely flat Is this why she's angry?
>>17825 Probably, all her fat went to the hips and thighs.
(176.36 KB 512x640 breast grab.png)

(650.80 KB 1024x1280 magic.png)

(171.21 KB 512x640 armsup.png)

>>17821 https://files.catbox.moe/q97wda.png https://files.catbox.moe/91hae1.png https://files.catbox.moe/1bea6c.png https://files.catbox.moe/wm4a8m.png https://files.catbox.moe/8m8xwx.png >hires. fix >photoshop and inpaint missing and wonky stuff >maybe img2img at same res to smooth any artifacts left by inpainting >tiled upscale >maybe overlay the two pictures in photoshop and make a picture with a high denoising background and a low denoising character >maybe downscale back some to retain detail from the higher res >inpaint final touches (usually just the head) it's not set in stone and depends on how much i want to add detail to the background and how much i want to preserve the character without adding pretty much anything to it. out of the images here i manually overlaid and erased skin parts of yellow elf and skirt lift girl in >>17766 and the whole character of rainy girl in >>17723 >>17790 and the ice fairy elf were done with CN tile colorfix enabled which seems to be somewhat incompatible with tiled diffusion? you can see the tiles if you look for them, but it does make things look more "organic" or smooth i guess? for kimono girls (nai VAE) in >>17689 and field girl in >>17690 i used normal CN tile which does add nice detail but offsets the color. others were generally just tiled diffusion + tiled VAE at 0.2 denoising strength. it adds some skin detail and removes some of the artificial upscaling look from backgrounds. i use openpose when i want to position the characters or want to do some niche shit. openpose also helps with positioning regional prompter and LoRA masks to separate multiple specified characters. low weight canny is used when i inpaint hands to keep the bot from imagining new fingers. https://boards.4chan.org/h/thread/7430467#p7431134 https://toyxyz.gumroad.com/l/ciojz >>17822 cute flatso
(2.70 MB 1280x1920 00108-1622621725.png)

(2.58 MB 1280x1920 00106-1622621723.png)

(3.04 MB 1280x1920 00104-2778456444.png)

(2.85 MB 1280x1920 00105-1622621722.png)

>>17782 https://files.catbox.moe/q39bzh.png >>17788 let me get this straight >use website known for people being retards and uploading dogshit >come here and complain about website having retards and uploading dogshit >get upset whenever people say "told you so" what did you honestly expect to achieve doing this?
>>17828 >https://files.catbox.moe/q39bzh.png >5 style loras mixed together ugh fine i'll go hunt them down
(510.51 KB 512x768 00023-3519043689.png)

(499.46 KB 512x768 00024-3519043690.png)

(478.57 KB 512x768 00022-3519043688.png)

>>17818 Ok it really feels like I'm doing something wrong.
>>17828 >what did you honestly expect to achieve doing this? Would it be better if I had an excuse? I mean I'm open to suggestions for a better way to find stuff like this if you have one, but if you guys are going to complain about me complaining then I'll just shut up and keep using the website "known for people being retards and uploading dogshit."
(6.32 MB 2048x3072 catbox_mflw52.png)

>>17830 well truth be told it was just a small test of throwing the first twenty pages of gelbooru into a lora. it does its job but I should have tagged certain things better >>17831 catbox example? Maybe some bad positive or negative prompts. Resolution could be the problem too >>17832 https://gitgud.io/gayshit/makesomefuckingporn is your friend, a lot of links with much better quality than civitai
>>17832 >but if you guys are going to complain about me complaining then I'll just shut up and keep using the website "known for people being retards and uploading dogshit." go back
>>17827 Thanks for the tips. Not sure if I'll be doing all that, since using tiled diffusion takes forever for me and is such a gamble sometimes. Using canny at low weight for hands sounds like a good idea, since I don't really like the hands model in that toyxyz blend file. I've been also thinking about ripping some hands from other sources and using those instead, but I haven't touched blender in like a year and I'm quite rusty. Still I might just stick doing >>17185 unless the hands are completely fucked and I really like the pic. Also I tried the CN inpaint model once to fix hands and I couldn't get good results with it and the colors were somewhat mismatched. But maybe that was because I used only masked mode at 512x512 along with openpose hands, where the fingers were too close together to make a fist, so that might have confused it. Normally I just rng them at higher denoise a bit until I get something that looks like an improvement I could draw over in ps, then run it again at lower denoise with my edits a bunch of times.
>>17833 one person curating a list of loras is great and all but it's not an alternative to a sharing site
>>17836 >a curated menu is great but i'd rather have a buffet of shit
>>17837 I'd rather have both options available, for the few choice items in the buffet that wouldn't get picked up by the menu, fuckwit
>>17838 >for the few choice items in the buffet that wouldn't get picked up by the menu, fuckwit weird way to tell someone you love to sift through shit to find a few bits of undigested food to eat
(2.47 MB 1280x1616 00024-3410851925.png)

(1.77 MB 1280x1616 00025-396552634.png)

(1.85 MB 1280x1600 00027-2804616833.png)

Cooking a lora from 8 images
Did the easy scripts remove the file with all the possible args?
>>17839 yeah your shitty metaphor kinda falls apart under scrutiny doesn't it
>>17842 >under scrutiny From the civitai faggot who keeps getting bffo'd? lol
Does anyone know if the Regional Prompter works together with Ultimate SD upscale and/or controlnet tile? I'm trying to upscale an image made with regional prompter beyond what my Vram can usually handle.
Has anyone made loras of the foxes? Like the pink or blonde one? Asking because I remember there was a lot of BA anons here
medium amount of trolling
https://civitai.com/models/65214/age-slider Someone posted this on hdg and it seems to work? Age slider thing that lolifies everything without having to use like 4 tags for loli, short, petite, etc. Pic is a random example after hitting a nikke character with the loli beam for example
>>17845 Damn, those fellatio ones are amazing. Did you use any special lora for those? >>17852 How long before those get removed? Also which one did you use?
>>17852 >tfw never use .pt files
>>17674 Catbox please.
>>17852 >pickle scan these and the whole embedding folder just in case >scanner crashes at bad-image-v2-39000.pt Should I be worried? >>17855 Didn't know about the first one, thanks.
>>17817 catbox?
(1.94 MB 1024x1536 00009-1213418324_cleanup.png)

>>17858 https://files.catbox.moe/k7ttdp.png Same prompt for others Had to reduce zankuro tenc because it kept fucking twintails up
>>17852 what's the point if it just replaces couple tags?
>>17853 I used all of the V2 ones to try them out. >>17861 >what's the point if it just replaces couple tags? It's fun to mess around with and an alternative. I had fun messing around with it, and since I like prompting loli and some others here do too, it might be a fun thing to experiment with or a tool to keep in your kit for later before civitai inevitably takes it down.
>>17861 They write vectors that aren't assigned to those tokens in most CLIP models, too. It will be more effective as long as those vectors are interpreted correctly by your SD model. If both your CLIP model and SD model are finetuned very specifically, or you're very aware of how to work around them, they won't do much, but for the average person, they're great. >>17853 >>17862 Those TIs don't break any of Civitai's policies in any way whatsoever, and the creator is one of the biggest names on the site, so if it does get taken down, that would be a pretty huge fuckup on their part.
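if anyone wants to see what those vectors actually are: an embedding file is just a small tensor of extra token vectors that gets spliced into the prompt at runtime. rough peek below, assuming the usual .pt layout webui writes (the 'string_to_param' key); the filename is made up and the age slider's own file may be laid out differently:

import torch

emb = torch.load("age_slider.pt", map_location="cpu")  # hypothetical filename
vecs = emb["string_to_param"]["*"]  # typically shape (n_vectors, 768) for SD1.x CLIP
print(vecs.shape)

# scaling the vectors is a crude way to weaken or strengthen an embedding's effect
emb["string_to_param"]["*"] = vecs * 0.8
torch.save(emb, "age_slider_weak.pt")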
>>17696 Teri Terio Norah Shinji Inuboshi
Would a Lora be enough if I wanna generate simple stick body types like the Doremis or is it something that can be done with just proper prompting
(3.33 MB 2048x1024 ComfyUI_19761_.png)

(3.16 MB 2048x1024 ComfyUI_19763_.png)

(3.36 MB 2048x1024 ComfyUI_19755_.png)

(3.15 MB 2048x1024 ComfyUI_19766_.png)

>>17865 Jamie Hewlett
(1.49 MB 1024x1280 14800-3832840685.png)

(1.52 MB 1024x1280 14815-2579287593.png)

really liking the negative values of detail tweaker lora. protruding ribs and midriff weirdness get smoothed out nicely, but now i keep wishing for a version that only affects the body so i don't have to sacrifice the background as well.
>>17868 Whats the detail tweaker lora? Really like your stuff by the way, looks very nice.
>>17868 You could probably achieve something like that with block weights.
Age slider seems to work pretty well for guaranteeing loli. I usually have issues getting the model to not age up the characters but the embed is actually pretty good and comfy to use. Slightly degenerate pictures so I spoiled but it's nothing too wild.
(3.14 MB 2048x1024 ComfyUI_20054_.png)

(1.11 MB 1024x512 ComfyUI_20040_.png)

(3.12 MB 2048x1024 ComfyUI_20031_.png)

(2.89 MB 2048x1024 ComfyUI_20020_.png)

>>17870 pretty useful. thanks.
>>17872 Squeezer LORA also works pretty well if you haven't tried it, well it's what I use sometimes if I want to hard force a loli on a model that has trouble genning that
>>17874 Never tried or heard of it, what's that?
>>17875 it's a LORA that will lolify or age up characters depending on the strength, harder values will age them down more and the opposite strength ages them up. https://civitai.com/models/38551/squeezer-experimental
(2.96 MB 1024x2048 ComfyUI_20156_.png)

(3.15 MB 2048x1024 ComfyUI_20120_.png)

(3.37 MB 2048x1024 ComfyUI_20163_.png)

(1.18 MB 1024x512 ComfyUI_20115_.png)

>>17868 I personally like the less detailed backgrounds as well.
all sd shit got very stale during June
>>17879 is there anything new that's cool for us? Extension wise or LORA making wise?
>>17856 Catbox please? I'm a fool for simple pussy
(3.04 MB 1536x1920 14885-1013113436.png)

(2.90 MB 1536x1920 14930-726990004.png)

>>17879 *continues to prompt bathing 1girls*
>>17880 it looks like there was a new optimizer added to sd-scripts called Prodigy, but it's meant for DyLORAs and i haven't bothered with those yet
>>17868 do you apply the negative detail tweaker on first gen or during inpainting? you could go in afterwards using inpainting anything extension to easily mask the body and then apply the negative detail tweaker to get rid of the ribs
>>17886 first gen and disable it for inpainting to add hair detail and eye detail. i'll try your way the next time i gen something.
>>17888 very cute shimapan
(2.21 MB 1024x1536 00044-3022973360.png)

(2.14 MB 1024x1536 00041-3022973357.png)

(2.17 MB 1024x1536 00025-880853559.png)

(2.19 MB 1024x1536 00024-2715240119.png)

>>17888 mind a bit of fan art?
what is adetailer even good for?
>>17892 nothing afaik. saves you like 3 seconds, i guess?
>>17892 lets you run txt2img without needing to inpaint, so you don't need to be there
>token merging doesn't work with tiled diffusion Kind of shame, I hate the 22s/it I get with it.
>>17620 >oet holy shit man you're the best, i've been hoping for this for months
(1.73 MB 960x1440 catbox_dard7r.png)

(1.62 MB 960x1440 catbox_sy0aju.png)

(1.70 MB 960x1440 catbox_iovfqn.png)

(1.98 MB 960x1440 catbox_xk90rt.png)

https://mega.nz/folder/OoYWzR6L#psN69wnC2ljJ9OQS2FDHoQ/folder/ngojkApT i am back on my shit, will probably do kasen next since I already started sorting data for her and then maybe akyuu/kosuzu? open to suggestions
https://anonfiles.com/T3z206j2z0/yassy_safetensors Dead link from >>14147, can someone reupload?
(1.71 MB 1424x960 catbox_wt2iqz.png)

(1.56 MB 1424x960 catbox_67la3d.png)

>>17900 https://files.catbox.moe/lkxlzm.zip I have a yassy LoRA, think it should be the same one.
>>17900 I have other yassy loras from who knows where so here they are just in case https://anonfiles.com/2cXaLdx4zb/yassy_7z
(2.69 MB 1280x1920 catbox_6v7kgl.png)

>>17898 Thanks I just finished training an updated version if you're interested. It's under the same name so make sure you move or delete the old one and reload the lora or restart webui.
>>17900 Different anon with another, different one. https://pixeldrain.com/u/RXhvuGiD
>>17906 Even if true, considering that 99% of ai stuff is tagged it's not a problem, artcels are just making up shit every day to cope. Although there really is too much of sameface low effort pajeet slop
(1.49 MB 1024x1536 18880-2768133169.png)

(1.62 MB 896x1536 18556-3387214972.png)

(1.74 MB 1024x1536 18622-3686400008.png)

(1.53 MB 1024x1536 18728-1984345725.png)

>>17903 >>17898 I tried training this artist a while ago too. Seems like it could do some good bondage gens, a lot of bodyhorror though, especially if there's more than one char on the image. Wanna give it a try?
>>17909 I kinda know two arthoes irl and they don't care about SD pretty much kek
>>17910 That's nice, sure beats having someone seethe as soon as you talk about SD or show off your gens.
>>17911 I'm a turd worlder though, it feels like SD outrage is pretty much exclusive to western twitteroids. Most people either like it or don't care
>>17912 >western twitteroids Yes this is correct but I don't really use shitter or any other social media myself.
>>17913 Yeah me too but you still know the type
(2.98 MB 1536x1920 14962-3512705288.png)

(3.22 MB 1536x1920 14994-583048402.png)

>>17908 NTA but I'd like to try it out
I hate how civitai turns every image into jpeg now, the import to hydrus script only supports pngs.
https://aibooru.online/posts/35285 >source: this thread welp
>>17919 He posted several gens from this thread before looks like
>>17920 for a /here/ shitter I'm disappointed in the other gens they have uploaded
>>17921 looks more like someone who copies everything for internet points
>>17922 I mean nothing wrong with posting stuff from here but probably would be better if he didn't link it in source
anyone tried making a lora of Diana from Pragmata yet?
>>17923 If it really rustles your jimmies I won't link the source for anything from here that I upload in the future. This isn't much of a sekrit club though when it's linked in places like the gitgud repo.
>>17925 i would prefer it if you didn't link us, personally
>>17925 I think linking it is a good thing. This place definitely isn't a sekrit club and it shouldn't stay such or else the community is going to die out as people get bored or move on.
>>17908 Some of these seem too dangerously realistic in my opinion
>>17925 I guess it depends on the aibooru audience. Fresh blood would be good if it isn't a bunch of braindead redditors.
>>17929 thankfully danbooru isn't social media, and aibooru is one step removed from danbooru itself, so one can only hope
I'd like to keep this place as niche as possible but yeah I'd agree with the whole "not having new blood stagnates the community" thing. I think the fact that we're centered around loli is more than enough to keep some redditors away for now.
(2.42 MB 1920x1536 15074-4154543186.png)

>come here during /h/ cunnycalypse >never got banned anyway huh
>>17928 The last two? It's just because of the dogshit body style coming from aom2 >>17917 Here you go, the pass is the 3-char tag of this place, just use whateverbooru tags on the pics of this artist with this https://anonfiles.com/A8UeY5xfz5/1eq2_7z
>>17933 cute
Okay, since more anons seem indifferent I'm going to continue linking the source in uploads, but I probably won't upload anything more from here for the time being anyway. If you don't like that decision then follow the same process that artists do on Danbooru and submit a takedown request. If it isn't your gen then deal with it. By the way, all history is tracked and searchable, so this does nothing: https://aibooru.online/post_versions?search%5Bpost_id%5D=35285 >>17931 I know this place was originally created because of anons getting banned but I personally just consider this a slower and (usually) calmer board for discussion, not for loli specifically.
Finally gonna retire the Chitose lora by RandAnon. Idk if they still hang out here, but I've had a lot of fun with their work. So thank you. This 40hara lora from civit has proven surprisingly useful
>>17937 Oh shit, I gotta give it a try. I hope it passes the choco test...
>>17938 I can already tell you in advance, it doesn't really work well by itself. Maybe you could work your magic though? You can force it using that lora I linked in (I think) the last thread. *slaps on 0.3 kasshoku lora* .... but results vary.
>>17939 Ouch, well I'll give it a try later.
(1.82 MB 1000x1280 00056-2689175392ed.png)

(1.93 MB 1024x1280 00051-2689175387.png)

(1.81 MB 1024x1280 00032-2167006483.png)

>>17939 Damn this looks pretty rough but I'll try it too. Also some more fan art.
>>17933 catbox?
(1.65 MB 1024x1536 00016-3900229407.png)

(1.20 MB 1024x1536 00021-1753540821.png)

(2.44 MB 1024x1536 00003-1584638735.png)

>>17942 A bit fried but sorta usable with lower cfg kinda
>>17941 >>17942 Yeah, like I said, I'm pretty happy with the lora, cause it will replace the one trained on 14 images that I was using for months to gen Chitose. But it's not good at doing choco for sure. Also, I presume we're all talking about the same one, right? https://civitai.com/models/89535/40harashimahara-style-locon >>17944 What cfg do you use? I usually just stick to 7
>>17945 >What cfg do you use? I usually just stick to 7 Usually 7 as well, but I feel like this one is better at ~5
>>17940 >>17946 Nice gens, anon >>17939 >kasshoku lora I was gonna suggest this too, that LoRA fixes it a good portion of the time.
(1.80 MB 1024x1280 20465-3670044618.png)

(3.32 MB 1920x1536 15180-2897303813.png)

>>17943 https://files.catbox.moe/2o9apc.png flipped 90 degrees clockwise for img2img to make the ai not sperg out
(6.57 MB 2048x2560 15123-1875961872.png)

uh oh managed to paste the hires fix one
>>17950 thanks dude
(1.91 MB 1024x1536 catbox_colh2s.png)

(1.88 MB 1024x1536 catbox_qlp105.png)

(1.88 MB 1024x1536 catbox_wk3hq5.png)

(2.06 MB 1024x1536 catbox_wmb2pw.png)

>>17939 Finally got some time to test this and yeah that kasshoku LoRA really does help, so thanks for that one. Getting some good fucking gens right now, 0.3 zankuro sauce I kneel...
Hey all, Been a hot minute since I hopped into the thread, started a new job, much less time to work with. But I'm currently working on adding in some new lr schedulers into the easy training scripts, starting with cosineAnnealingWarmupRestarts. I plan on adding more though, a whole host of them. Stay tuned, I'll definitely tell you guys when I get them working.
(1.53 MB 960x1152 catbox_41vva7.png)

>>17940 c@box? I wanna study this model/style mix
(2.22 MB 1024x1536 catbox_vttnul.png)

(926.77 KB 512x768 catbox_0pkj17.png)

(923.31 KB 512x768 catbox_2v7cz3.png)

(940.32 KB 512x768 catbox_lnbeeu.png)

>>17957 thanks much, the results are very adorable
Anyone know how to reliably gen this body type? It seems like you can kinda get it if you're doing just pin-ups, but once you prompt sex, the body gets more defined. I think it's the character lora doing it for this guy. https://pixai.art/artwork/1628104301363621779
>>17961 you got all the metadata right there dude loli model, loli artist loras, (curvy, plump, mature female, large breasts, wide hips, thick thighs:1.4) in negative
>>17963 Well I tried downloading that model and adapting the prompt to mine and it didn't really work.
>>17964 it looks like it's barely working for him tbh just grab some loli artist loras and based64 and work from there
>>17965 I'll try when I get time again.
I am sorry but based66 is pretty bad
>>17953 Nice gens, good catbox, glad Kasshoku helps. Have an angry nun and maid
I feel kinda lost about what to gen or train next. still got some time off but somehow don't know what to do...
(1.81 MB 1024x1536 00114-2206824001.png)

>>17971 so, have you stopped posting on civitai yet?
>>17969 Sometimes less is more...
>>17972 Never posted there, I'm not the usual Marnie anon.
>>17970 fine art maybe? or at least art outside of *booru/pixiv i've had good results training off wikiart
>>17973 Fewer angry maids? No way :( If you have any actual criticism / advice on how to improve the quality the post, lmk. I'll go back to lurking for now >>17942 May I have a catbox for these btw, they look great!
>>17976 Nope sorry, I meant actually being dressed is hot. Showing less skin is "more".
(1.51 MB 1024x1280 00182-1186020321_cleanup.png)

>>17976 Pretty sure they were all same prompt https://files.catbox.moe/ooi0v6.png
(1.42 MB 3584x1152 image.jpg)

>>17892 >what is adetailer even good for? https://imgsli.com/MTg3ODcw for slider comparison. Here's a non-cherry picked gen. I'd say it's great for gens where the faces don't occupy 20% of the image. If the generation is good ol' 512x768 1girl cowboy shot, adetailer isn't as useful, but I still find a slight improvement in quality even then. That being said, I'm on a 3090 so the increase in gen times is negligible to me.
>>17980 No Zankuro mixed now?
>>17981 Still using Zankuro, 0.8 40hara mixed with 0.3 Zankuro.
>>17967 I see people barely talk about it compared to the previous versions, what flaws did you see?
>>17983 in my use it was fine but it required different prompting than based64 and i'm used to that model
(1.09 MB 960x1440 Shrine Maiden.png)

(350.37 KB 682x697 river prompt.PNG)

>>17977 I completely misinterpreted. Nobody wants to be that one anon in the thread that spams their unwanted *cough* watermarked *cough* gens. >>17980 Easily accepted. Goodbye for now river prompt, you've served us well.
been a long time since LORAs came out, did we come out with a basic layout for training settings here?
>>17988 what do you mean by basic? most use the easy script from that one anon here and I think use the standard settings there. the best 'parameters' are the ones you like or have found to be working, as most people have different tastes
>>17989 standard settings were 1e-4 and 5e-5 for text I think, which makes sense. I doubt there's much you can do there beyond the training data you have
(1.10 MB 1024x1536 00042-2450667977.png)

(1.67 MB 1024x1536 00167-4094059495.png)

(1.63 MB 1024x1536 00078-1506251086.png)

(1.71 MB 1024x1536 00025-2865243968.png)

(1.20 MB 1024x1536 00790-1563018902.png)

(1.57 MB 1024x1536 00116-2269491370.png)

(1.57 MB 1024x1536 00340-623042907.png)

(1.88 MB 1024x1536 00291-3471150020.png)

>>17991 Cute, thanks.
does the SS uniform LORA still exist? Was it ever updated?
(2.52 MB 1280x1920 00147-406269090.png)

(2.47 MB 1280x1920 00151-675493691.png)

(2.71 MB 1280x1920 00178-1123010383.png)

(2.78 MB 1280x1920 00172-1123010377.png)

visors and bunny ears
Installing the Lora easy training script, it's asking me which version of torch to install (1.12.1, 2.0.0, 2.0.1). Any benefit to installing an older version or should I just go with the newer one?
>>17997 2.0.0 is the latest stable version, just go with that one
(3.28 MB 1920x1536 15256-719879006.png)

(2.14 MB 2496x1536 15477-2740854050.jpg)

(1.42 MB 1024x1536 15512-789686434.png)

(3.71 MB 1536x2304 15563-807896028.png)

>>18002 Yes, the plan is to put all the maids through this prompt.
(6.35 MB 2048x3072 catbox_40jodj.png)

(6.35 MB 3072x2048 catbox_xjtyic.png)

(6.61 MB 2048x3072 catbox_jbw0xq.png)

(1.39 MB 1024x1536 catbox_n4dphp.png)

hey all, I finally got the new scheduler working in the scripts. for some reason I was getting some really stupid import errors, so I had to quite literally create an installable package for it. but it works, I haven't had time to thoroughly test it, but in theory it should lead to better results once we figure out settings for it. hope people on this board have been doing well, I haven't had much time lately, so most of my free time has been spent working on the scripts, and a small amount of genning in-between. last image is from the setmen lora I baked a bit ago but forgot to share here: https://civitai.com/models/92795/setmen-inspired-style-locon
>>18004 ah forgot the link to the scripts. https://github.com/derrian-distro/LoRA_Easy_Training_Scripts#changelog changelog in that link.
>>18004 are there any documents from kohya or anyone else that explain what benefits the new scheduler has?
>>18006 there isn't any documentation on what it does for lora. apart from the lesser implementation of cosine with restarts, effectively it allows you to have higher lr's with more repeats than pure cosine with restarts because the max lr decays per restart. It's designed to allow the model to learn without hitting local minima, which a restart can usually kick a model out of. since it's annealing though, the restart shouldn't force it to diverge at all, as each restart has less lr to work with. As far as I know, the implementation here is one that is used in a lot of other AI projects, and is, to the best of my knowledge, one of the better, if not best, schedulers for learning rate. when it comes to lora though, while it's not set in stone what it does, I have seen seemingly better results in fewer steps. however I didn't really tweak the settings while I was testing, I mainly hope that the community will help me figure out how well it works.
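for anyone who'd rather see the idea than trust the description: a rough stand-in built on a plain LambdaLR is below. it's not what the scripts actually ship (the real package also lets cycles grow via a cycle multiplier) and the parameter names only mirror it; the point is the warmup, the cosine decay, and each restart coming back to a lower peak (gamma < 1) so later cycles can't kick the weights as hard:

import math
import torch

def cosine_warmup_restarts(optimizer, first_cycle_steps, warmup_steps=0, gamma=0.5, min_scale=0.01):
    # every cycle: linear warmup, then cosine decay; each new cycle's peak lr is
    # multiplied by gamma, which is the "annealing" part of the restarts
    def scale(step):
        cycle, pos = divmod(step, first_cycle_steps)
        peak = gamma ** cycle
        if pos < warmup_steps:
            return peak * (pos + 1) / warmup_steps
        progress = (pos - warmup_steps) / max(1, first_cycle_steps - warmup_steps)
        cos = 0.5 * (1 + math.cos(math.pi * progress))
        return peak * (min_scale + (1 - min_scale) * cos)
    return torch.optim.lr_scheduler.LambdaLR(optimizer, scale)

model = torch.nn.Linear(4, 4)  # dummy params
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
sched = cosine_warmup_restarts(opt, first_cycle_steps=200, warmup_steps=20, gamma=0.5)
for step in range(600):
    opt.step()   # the real training step would go here
    sched.step()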
>>18007 getting this error when using it >TypeError: CosineAnnealingWarmupRestarts.__init__() missing 1 required positional argument: 'first_cycle_steps'
>>18008 odd, I installed it 5 times at the very least before pushing it to main, and it worked every time.
>>18004 I found a bug that prevents restarts happening with cosine with restarts if you have warmup_ratio on. Pushed an update to fix that a few minutes ago.
what's everyone's preferred way to replicate a body part like a specific nipple shape? i've just been using controlnet animelineart with inpainting like a chud. results aren't good most of the time though
>>18011 use a lora from an artist that does it that way
Are there any da vinci (fate grand order, rider?) character Loras? Only one I found is a bit too realistic for me.
>>18004 Very cute stuff, I love that last one with the blonde girl.
Is there a non-inpaint solution for heart shaped pupils?
(1.67 MB 1024x1536 15662-2568887286.png)

(2.01 MB 1536x1024 15790-1359640106.png)

(3.64 MB 1536x2304 15869-3514248343.png)

>>18001 cute angy maids
>>18016 ohfuckthefirstone
>>18016 Did you inpaint the faces yourself on the second one or was it adetailer?
>>18007 You are so right about this scheduler. I've been training locons with this scheduler, slightly modified by one guy though, for approximately the past 1.5 months. Obviously it's an improvement over cosine with restarts. CWR has a moment where it just starts to learn nothing, like below 1e-6 or so, but this one has very smooth training, especially good with styles, at least imo.
>>18016 That first image is very nice.
(36.75 KB 768x512 tmp7f_331wz.png)

(588.08 KB 768x512 20912-196110807.png)

(4.44 MB 2304x1536 15926-3024238080.png)

>>18018 https://files.catbox.moe/wbijjq.png inpainted with CN tile colorfix. the 3 faces were done with more care, but the crowd areas were just inpainted completely at 1920x1536 or something silly like that. artistic circles to get it to add the crowd or something crowd-ish. >>18017 >>18020 thank. it was kinda annoying to get right and luckily i prefer 1girl, smile.
(1.70 MB 1024x1536 00175-2132661308.png)

(1.75 MB 1024x1536 00035-317137048.png)

(1.47 MB 1024x1536 00085-2220485594.png)

(1.50 MB 1024x1536 00052-1039035891.png)

https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg rebaked murata yusuuke. new version is definitely better, though it isn't perfect and i still like the old one sometimes
(1.59 MB 1024x1536 00018-2298438322.png)

(1.89 MB 1024x1536 00013-1054174102.png)

(1.97 MB 1024x1536 00140-686656441.png)

(1.53 MB 1024x1536 00795-1630834935.png)

>>18021 >CN tile colorfix I still have no idea what cn tile is, never bothered to go above hires fix.
>>18024 It basically adds detail for the current resolution. It's kinda like latent upscaling that doesn't change the composition and works at lower denoise. Also works without upscaling, you just throw an image at decent res (like after hires fix) to img2img and it conjures details, see pics in >>17762 Normal tile shifts color as you can see, colorfix should fix that, but it adds seams if you're using tiled upscalers for super res.
>>18021 I was wondering for a while, do you change your prompt or model / other parameters when upscaling or do you keep them same as before? Also when you inpaint the faces with cn tile, are you in whole picture or only masked mode with res that's higher for that area?
(1.68 MB 960x1440 catbox_go92mk.png)

(1.85 MB 1152x1320 catbox_47mniz.png)

https://mega.nz/folder/OoYWzR6L#psN69wnC2ljJ9OQS2FDHoQ/folder/7p5EhC6I added kosuzu, might revisit her later to try and get the text more consistent because right now it's goofy as fuck. akyuu is my last planned 2hu for the time being
>>18022 Nice to see you are still baking. I've run out of artists to train and I'm kind of burnt out. I just scroll twitter occasionally to find someone worth doing.
>>18028 river prompt is never truly retired. also catbox pls? i have one of the gens with the prompt but it's buried under a shitload of other folders lol >>18029 i have slowed down, but i'd attribute that more to vidya than burn out, but it's definitely there
(2.34 MB 1280x1920 00002-2266773172.png)

(2.46 MB 1280x1920 00003-2266773173.png)

(2.20 MB 1280x1920 00001-854745220.png)

(2.61 MB 1280x1920 00004-167895601.png)

(1.97 MB 1280x1600 catbox_6ja8x3.png)

(2.10 MB 1280x1600 catbox_k2bn0m.png)

(2.11 MB 1280x1600 catbox_3a3wu5.png)

(2.67 MB 1920x1280 catbox_ju1gg6.png)

Porforever lora. Cute chibi fantasy style https://mega.nz/folder/2FlygZLC#ZsBLBv4Py3zLWHOejvF2EA/folder/TZtTFDyI >>18004 >Setmen Good taste. I took the chance to compare it to mine and I'd say yours is better especially in the eyes. Seems like random crop is the way to go which is what I ended up doing for this one
(2.59 MB 1920x1280 catbox_4yv15b.png)

>>18033 w i d e
>>18027 nice, have a Kosuzu for your work anon!
(3.65 MB 1536x2304 15952-465349377.png)

>>18026 subject to change, again. i upscale once with hires. fix and then in extras tab with remacri. i make the big changes before the second upscale and run it through img2img at low denoise to make stuff uniform again. for inpainting i usually keep the model and the prompt the same because cn tile will usually keep things about where they were regardless. culling the prompt to only the style tags should reduce the chance of any weirdness appearing as it normally does. i use only masked for everything. high res and high denoising strength for the background, anything from 0.5-1, and low denoising strength 0.1-0.3 for the body and 0.05-0.1 for the face to keep it mostly the same while adding sharpness. inpainting with high denoise and high resolution will usually make eyes, nose and mouth more realistic and isn't very faithful to lora(s) (at least in my case). if i used regional prompter with multiple characters and expressions, i remove the other characters from the prompt when focusing on one of them to keep the expression and switch the prompt when i move to the next one. >upscale hires. fix output 4x with remacri >downscale it back to something reasonable >enable cn tile with colorfix preprocessor (or colorfix+sharp depending on mood) >inpaint the background in 4 or so parts at high denoising strength while barely overlapping with the outlines of the character, 1920x1080ish or 1536x1536ish is max res for me >inpaint the character from neck down at low denoising strength >inpaint the head at minimal denoising strength at 1024x1024 or so >inpaint the eyes with minimal denoising strength at 768x768 or so (both eyes at the same time for matching patterns) >inpaint some small areas i want to highlight or add detail to in the background i stopped using tiled diffusion because it causes seams, and inpainting with cn tile colorfix is bad at fixing obvious seams/discolorations, and lowering the control weight or not using it at all will usually cause seams of its own. lowering the control weight can work to change or fix stuff, but the edge is just very obvious when it's against something simple. it's not really a big deal but i want my sky and other simple colored gradients to be uniform lol.
(127.83 KB 960x1200 08.jpeg)

Greetings. By any chance, did anyone download this loha? https://civitai.com/models/88935/guizhencao-style-loha https://www.pixiv.net/en/artworks/108903587 The author took it down for unknown reasons, I really wanted to use this style in some gens...
>>18037 I have a "guizhengchao.safetensors" lyco downloaded recently which I can only presume to be that one. https://pixeldrain.com/u/7E6EWfgx
I'm getting an insane amount of black images even with --no-half-vae in my launch args. No idea what is causing it but some prompts just produce black images 100% of the time, moving the tags around can fix it sometimes. Normally it's maybe 1/5 gens but it depends on the prompt or loras. --no-half normally fixes it but it makes gens 3 times slower. Any ideas? I'm also using --opt-sdp-attention --disable-nan-check on torch 2.0
>>18039 Stop using NAI VAE
>>18040 I'm using clearvae which I think is a better nai vae. What is better? I don't like kl-f8 or vae-ft-mse because they fry the shit out of everything
>>18041 >because they fry the shit out of everything sounds like a skill issue ngl clearvae might be based on the nai vae, in which case the bug is still gonna show up
(2.25 MB 1280x1920 catbox_2p1pem.png)

>>18041 If you actually want to use clearvae then you should also use --no-half-vae I do remember frying being an issue at first but I don't remember what changed since I haven't had that happen for the longest time. Probably just stopped using shit models and loras
>>18042 How is using a vae a skill issue? Those two reduce the sharpness of an image for more saturation, and in a lot of cases fry them. It doesn't really have anything to do with the prompt. >>18043 bro? >>18039
(2.42 MB 1280x1920 catbox_h2n8qb.png)

>>18044 Whoops missed that part. Sounds like it's time to move to kl-f8-anime2, ema-560000, or mse-840000. Maybe post frying examples
>>18044 >How is using a vae a skill issue? refer to what >>18043 said >Probably just stopped using shit models and loras
(1.53 MB 896x1232 22061-1864073100.png)

(1.58 MB 896x1232 22063-1864073100.png)

>>18045 Very simple prompt on based64 I quickly did. Pretty obvious which vae is which. Right is clearly less sharp looking at the hair, and this is just a mild example. >>18046 Based64 is shit now? And this happens on all models. Anyway I'm just going to assume the black image bug still has no fix other than no-half.
>>18047 >he thinks that's fried
>>18048 >this is just a mild example Nigger
>>18049 >"guys look at how bad this is!" >the example is just fine you're the nigger for providing a shit example
>>18050 >"guys look at how bad this is!" But I literally said >this is just a mild example Are you illiterate as well as blind, retard?
>>18051 >Are you illiterate as well as blind, retard? >>18041 >What is better? I don't like kl-f8 or vae-ft-mse because they fry the shit out of everything >they fry the shit out of everything
>>18052 Yes, they can do exactly that. You're right my example is not the best but that's exactly what I said in my post, and more to the point it supports my case that mse is worse, albeit slightly. Take my word that it can get worse or don't. I don't give enough of a shit to sit here genning better examples to convince you, and you're just going to go out of your way to strawman and argue semantics anyway. I'm more interested in a solution to the black images.
>>18047 hair and eyes are muddier but the fingers are better
>>18053 >I don't give enough of a shit to sit here genning better examples to convince you, and you're just going to go out of your way to strawman and argue semantics anyway.
>>18055 /hdg/ seems more your speed champ
>>18056 where do you think we are nigger
>>18041 >kl-f8 or vae-ft-mse because they fry the shit out of everything They don't
>>18058 Okay I'm doing xyz grids now and it seems like they fucking do. Jesus, that's something that had slipped my mind for a while, but it seems like 84k and f8 literally make some models fried by default (like counterfeit). Blessed2 vae looks pretty good actually. Hopefully no half vae helps with the NaN bug
VAEs are incredibly misunderstood and overlooked for being an entire third of this damn system, jesus. I guess it doesn't help that there's only like 4 of them and then 20000 copies of those.
>>18060 There's just so much stuff to adjust as well that I kinda decided to dismiss VAEs for a while. But blessed2 (NAI-based) legit looks the best right now. Probably never gonna use 84k again. I had certain loras getting fried with 84k and I usually solved it by switching to euler a and reducing cfg, but with NAI-based loras everything works just fine. Clearvae looks kinda bad so far, but I'm pretty surprised with blessed2's performance.
>>18060 help us understand sensei
>>18047 This is why I've been avoiding the non-nai vaes, I really hate the blurrysharp look they make.
>>18036 Don't you get bad color blending or badly aligned elements while using only masked? What's your padding? I usually use something high to avoid misaligned elements, but sometimes the area is more or less saturated compared to the rest. Do you use cn inpaint together with cn tile or is it a waste of processing time? Having to inpaint backgrounds in multiple smaller parts sounds annoying. Shame cn tile has problems with tiled diffusion.
played with kl-f8-anime2 vs blessed2 for a while and came to the conclusion that blessed2 can somewhat iron out really tortured lora mixes but normally just has a little less color and detail. another fucking thing to twiddle
>>18061 >>18066 Clearvae is some merge between the NAI and SD VAE iirc. Blessed is a modification of the in and out layers of the VAE which is literally the same as applying color correction in post. Just do that yourself manually or use an extension like https://git.mmaker.moe/mmaker/sd-webui-color-enhance
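to make the "it's just color correction" point concrete: the bless trick boils down to scaling and shifting the decoder's last conv. rough state-dict sketch of the idea below; the key names and knob values are illustrative, and this is not the actual BlessUp tool (which also handles gamma and the encoder patch):

from safetensors.torch import load_file, save_file

sd = load_file("vae.safetensors")        # hypothetical input VAE
contrast, brightness = 1.1, 0.02

for k in sd:
    # the decoder's final conv produces the output image; scaling its weight and bias
    # acts like a contrast slider, shifting the bias like a brightness slider
    if k.endswith("decoder.conv_out.weight"):
        sd[k] = sd[k] * contrast
    elif k.endswith("decoder.conv_out.bias"):
        sd[k] = sd[k] * contrast + brightness

save_file(sd, "vae_blessed_sketch.safetensors")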
>>18066 >another fucking thing to twiddle Exactly. Was hoping that 84k would be a "just works" option but yeah it really isn't. >>18067 Why if I can just use blessed2? I'm pretty content with how it looks. Maybe a bit more saturated would be better but yeah I guess I can tweak it with color enhancing. Also I notice some slight changes compared to NAI beyond colors. xformers disabled
(2.23 MB 1024x1536 15953-1225284198.png)

(2.33 MB 1536x2304 16042-1643716371.jpg)

>>18065 not that much. padding doesn't seem to matter that much. cn tile with colorfix preprocessor only. keeps the elements (mostly) coherent and keeps the colors (mostly) in place. it's not perfect and i do still try to use the natural edges to my advantage, like in this picture: the region from the top edge to the top of the chair and from the left edge to the outline of the character is one of the only masked areas.
>>18068 >Why if I can just use blessed2 It can cause issues like banding with hires fix due to how the in/out layer modification works. To be fair any form of color grading can do that to 24-bit RGB images but it causes fewer issues if you limit the amount of passes you're doing over it (hires fix causes the color correction to happen twice with the blessed VAE in use, that extension only does it once). There's a bit more technical info here. The guy who came up with the idea for the blessed VAE admits it can potentially make images look worse but they claim it ultimately can balance out due to the denoising process, but that's dependent on just how much you denoise it. https://github.com/sALTaccount/VAE-BlessUp/issues/1#issuecomment-1459017645
>>18070 Also I should've mentioned this but the blessed VAE impl is also strictly limited to brightness/contrast/gamma. That extension actually does some more advanced trickery by converting the color space to something to adjust only the chroma which prevents any chance of blowing out the image.
>>18070 >>18071 https://huggingface.co/NoCrypt/blessed_vae Wasn't this fixed? >blessed-fix.vae.pt : blessed VAE with Patch Encoder (to fix this issue) >blessed2.vae.pt : Customly tuned by me Although it's pretty hard to see the differences between the versions
>>18072 The patch is so it works correctly with hires fix (mentioned in the issue I linked, maybe I shouldn't have linked to a comment). It doesn't affect images generated w/o hires fix.
>>18072 >>18073 Use the download link here and get the unpatched version to see how it behaved before it was fixed. https://github.com/sALTaccount/VAE-BlessUp#download
This vae talk makes me wonder, are we still stuck with the last different vae being kl-f8?
>>18073 >>18074 I'll just admit that I'm a bit of a brainlet to understand this correctly right now. I'll read how VAEs actually work later I guess, don't want to make wrong conclusions >so it works correctly with hires fix Is it gonna work correctly with img2img and hires fix? I don't really care about lowres images tbh
>>18076 >Is it gonna work correctly with img2img and hires fix? He says it will look worse if your denoise strength is low, also as I mentioned prior. See: https://github.com/sALTaccount/VAE-BlessUp#patch-encoder
>>18077 I see, thanks.
>>18077 Just to translate what this means, it's as if you were to give an image more contrast, save it, and then try to do the inverse process to restore the original colors as they were before. Not a completely analogous comparison but think how RAW footage is processed if you're familiar with that. Basically once you save something down to a lossy format it's going to be impossible to bring those original colors back 1:1 so you will always have a loss in detail trying to do so. In this case it's potentially less exaggerated because img2img/hires fix at a higher denoise should clean up any artifacts produced by doing that inverse process. If this description makes no sense then the one on that github should be sufficient enough.
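same thing in toy numbers, nothing SD-specific, just the grade-then-ungrade round trip through 8-bit:

import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # pretend image channel

contrast = 1.3
graded = np.clip((img.astype(np.float32) - 128) * contrast + 128, 0, 255)
saved = graded.astype(np.uint8)          # "saving" quantizes back to 8 bit

restored = np.clip((saved.astype(np.float32) - 128) / contrast + 128, 0, 255)
err = np.abs(restored - img.astype(np.float32))
print(err.max(), err.mean())  # not zero: clipped highlights/shadows and rounding are gone for good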
>>18080 Catbox for these bunny girls?
>>18082 do you have a link to that 40hara lora? found a locon but that's it
>>18083 Pretty sure I'm using this one: https://civitai.com/models/89535?modelVersionId=95315 Probably the one you found.
(1.49 MB 4000x2945 xyz_grid-0143-146740118.jpg)

>>18082 lmao i was curious what the kasshokuhada did and got a pretty good chuckle when i walked back to my pc and saw the grid
>>18085 Chocofication, you love to see it.
did anyone try out the copier lora method yet? Seems to be a chink/jap thing right now https://rentry.org/copier_lora
>>18087 I don't have the autism to comprehend how they're doing this, but I'm hoping this can be used to retrain a LORA onto another model. I have a few AOM2/3 LORAs I'd like as NAI ones.
>>18085 I wonder if it has to be trained on an output image. What if you train 2 loras, where each image has variations of the same subject with a concept you want to extract the difference out of.
>>18089 Wrong reply, meant >>18087
would a baker be willing to redo the eximizu lora? i love the guy's style but the existing lora produces some uniquely mangled hands.
>>18091 >eximizu i couldn't find an artist by this name. got a link?
>>18092 no, and i'm a little confused now.
(1.78 MB 1280x1600 catbox_sefyhc.png)

>>18037 >>18038 They also posted it on huggingface https://huggingface.co/TLFZ/guizhencao-loha I also started to make my own version in case it wasn't found but finished it anyway https://mega.nz/folder/2FlygZLC#ZsBLBv4Py3zLWHOejvF2EA/folder/CRsSUSLb
anyone have a good workflow for uncensoring mosaics/bars?
>>18095 Wait is it any decent now? What are you running?
>>18097 Well I find it somewhat enjoyable, I'm running 30b due to my hardware limiting me but I got to test my friends raw 65b which was way better.
>>18095 catbox? also what textgen model?
>>18095 if you think local textgen is dangerous you certainly have not tried gpt4 yet, I'm annoyed it has literally ruined any interest in local textgen for me.
>>18101 >I'm happy with local textgen for now Good, unironically keep it that way while you can. For what it is, Wizard is solid. GPT4 is like the forbidden fruit... I guess quantized Airoboros 65B did just come out <1hr ago, which is trained via GPT4 and supposed to be better than Wizard, but no benchmarks to confirm that yet. https://huggingface.co/TheBloke/airoboros-65B-gpt4-1.4-GGML/tree/main
>>18102 Cool I guess I should tell my friend about this but I bet he already knows as he's really into this.
>>18102 actually it's still uploading at the time I'm posting this lol, so it's in the process of being released
Anyone know if there's a lora of north ocean hime?
>>18100 i deliberately never touched gpt4 for exactly that reason lol
It's a damn shame that you pretty much need female pubic hair in the negs to get a nice innie. I guess you need to inpaint if you want an innie pussy + a nice bush
>>18106 Yeah her. I'll fiddle with it and pray.
(2.61 MB 2304x1536 16114-1411932457.jpg)

(2.26 MB 2304x1536 16163-1237087124.jpg)

(2.53 MB 2304x1536 16211-1958499725.jpg)

(1.63 MB 1280x1600 catbox_9szvf0.png)

(1.61 MB 1280x1600 catbox_4dot9n.png)

(2.05 MB 1280x1920 catbox_6x5pjg.png)

(2.05 MB 1280x1920 catbox_a6ihto.png)

(1.70 MB 1280x1600 catbox_zryfn5.png)

(1.77 MB 1280x1600 catbox_pvivv3.png)

Jesus why didn't I think of this earlier
>>18112 now do madoka getting joyfully railed by her boyfriend while homura looks on heartbroken
>>18092 >>18093 imizu_(nitro_unknown) on boorus I_MI_ZU on twitter 威未図@いみず id: 243408 on pixiv
>>18111 >>18112 oh neat, I remember this artist from forever ago do share the LoRA if you manage to finish it
(2.16 MB 1280x1600 catbox_pbnqjk.png)

>>18116 madoka making homura eat her boyfriend's creampie out of her as punishment for spying on them while he looks away in embarrassment, not yet understanding that the nasty dyke deserves it quite a high-minded concept, very admirable work
(2.10 MB 1280x1600 catbox_smmt76.png)

Meduka wot r u doin
>>18114 Another artist whose art style degraded over time. How the fuck does it happen
>>18114 man no wonder the lora is so schizophrenic
how thick is too thick?
That's funny, the 40HARA LoCon basically turns her into Chitose.
(2.21 MB 1280x1920 00015-1492314927.png)

(2.40 MB 1280x1920 00029-2793383745.png)

(2.24 MB 1280x1920 00031-2965001434.png)

(2.30 MB 1280x1920 00170-507320373.png)

https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg added sanuki-(kyoudashya) also realized i forgot to add metadata for the last two uploads, so that's fixed >>17714 >>17736 lmao took me a minute, but finally got around to doing one of these at least
(1.79 MB 1280x1920 00814-3896197017.png)

(1.84 MB 1280x1920 00080-925192603.png)

(2.44 MB 1280x1920 00064-621152989.png)

(2.61 MB 1280x1920 00080-2031007812.png)

>>18123 As it should >>18108 I think inpainting would probably be the most efficient way to go about it. Maybe some loras can help
https://gelbooru.com/index.php?page=post&s=list&tags=kso Anyone have a lora of this guy? I always liked his loli stuff. Also, any suggestions for other good loli artists? I like the Musouduki lora a lot but dunno what other good ones are out there
(1.95 MB 1280x1600 catbox_krhtqs.png)

(2.08 MB 1600x1280 catbox_lczg3s.png)

(1.93 MB 1280x1600 catbox_h6k4y2.png)

(1.68 MB 1280x1600 catbox_9oyyv6.png)

>>18128 Don't have one for them but I have a WIP of a kinda similar artist (Dragonya) in the meantime https://mega.nz/folder/2FlygZLC#ZsBLBv4Py3zLWHOejvF2EA/folder/PZlmTTIJ Having the same issue that I had with Innerkey where the proportions and features are right but the style and linework aren't quite there.
>>18129 catbox for the first pic?
>>17827 you are a master
For anyone who cares, here is an updated list of models wiped from CivitAI based on metadata I've been scraping, barring a few gaps here and there https://rentry.co/35fa9
>>18135 Man, their power hungry mods are retarded af, seeing this much disappear.
>>18136 To be fair, I would wager that a solid majority of those were deleted by their authors, not mods.
(1.19 MB 1024x1536 00066-610863732.png)

(750.43 KB 1024x1536 00165-3887046894.png)

(1.11 MB 1024x1536 00228-1683783474.png)

(1.07 MB 1024x1536 00520-860291511.png)

https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg added cirenk. muscular female tag is best with it
(1.04 MB 1024x1536 00059-1000274068.png)

(1.19 MB 1024x1536 00066-610863732.png)

(1.24 MB 1024x1536 00858-2875930399.png)

(1.46 MB 1024x1536 00163-2378404098.png)

What are some good checkpoints/Loras/prompts that produce that kinda 3d looking texture as opposed to the really flat looking images?
do you guys think sd upscale or upscaling directly with img2img is better?
I haven't checked in since April, anything new come up? from what I've seen on /g/ and /hdg/ shit has stagnated.
(642.82 KB 512x768 pinpon_sakana1.png)

(2.07 MB 1024x1536 ComfyUI_27821_.png)

(867.29 KB 512x768 ComfyUI_27850_.png)

(906.67 KB 512x768 ComfyUI_27846_.png)

Does BasedMix anon still drop by here?
>>18145 This looks incredibly weird
>>18146 I lurk, just haven't done much LORA baking or model stuff for a while
>>18145 This is incredibly soul
(2.08 MB 1280x1600 catbox_36n4be.png)

(2.40 MB 1280x1600 catbox_2kv4w7.png)

(1.91 MB 1280x1600 catbox_p9hes1.png)

(2.27 MB 1280x1920 catbox_8u9cht.png)

>>18148 Do you still recall your exact recipe mix for Based64? I wanna try something but using a different finetune instead of the hololive model.
>>18145 now that is something strange. I made the same lora but have it under another name https://gelbooru.com/index.php?page=post&s=list&tags=mamei_mema+ https://gelbooru.com/index.php?page=post&s=list&tags=pinpon_sakana+ Is it another case of an artist with multiple names?
(2.76 MB 1024x1536 ComfyUI_28421_.png)

(2.66 MB 1024x1536 ComfyUI_28418_.png)

(969.35 KB 512x768 ComfyUI_28406_.png)

(1017.27 KB 512x768 ComfyUI_28396_.png)

>>18147 >>18149 i prefer weird artists since most people already go for training the most popular NSFW artists, and weird styles that are also NSFW are somewhat underserved
>>18152 interesting, would you mind uploading it so I can compare
>>18152 >Is it another case of an artist with multiple names? Danbooru is always the best way to verify this https://danbooru.donmai.us/artists/254968
>>18157 how many online alias do people need? well thanks anon gotta look there next time
>>18158 I have about 10 aliases that I have used during the pandemic alone
Someone managed to snag the Roxanne from Goofy Movie Lora before it was deleted from Civitai?
>>18160 why would she get deleted? I thought she had a thing for Max's dad, Goofy
LORA anons are we pruning tags anymore for character LORAs or nah?
(7.38 MB 2137x3896 00007.png)

happy 4th friends
>>18161 Because Civitai mods are dumbasses
(2.65 MB 1200x2000 catbox_lnrve9.png)

(2.52 MB 1280x1920 catbox_odxu71.png)

Haven't seen a Nahida posted here in a while
>>18166 Old and busted, where's the new hotness?
>>18166 I love Nahida
(2.14 MB 1600x1280 catbox_rrf73l.png)

(2.10 MB 1280x1600 catbox_dqqpo0.png)

(2.27 MB 1280x1600 catbox_73q3u0.png)

(2.04 MB 1280x1600 catbox_s96igg.png)

>>18168 uuuoooggghhh😭😭😭daughtermommywife💢💢💢 Mochikushi lora https://mega.nz/folder/2FlygZLC#ZsBLBv4Py3zLWHOejvF2EA/folder/bNsDGboI
(2.04 MB 1280x1600 catbox_qxy2jr.png)

(2.06 MB 1280x1600 catbox_ujqfa9.png)

(1.99 MB 1280x1600 catbox_s3uy0q.png)

(2.00 MB 1280x1600 catbox_ykxg6i.png)

kasshokuhada is probably my favorite lora right now
(1.52 MB 1600x1280 catbox_rd8b56.png)

(2.12 MB 1600x1280 catbox_mgwdex.png)

(2.08 MB 1600x1280 catbox_mfn998.png)

(1.77 MB 1280x1600 catbox_pjkpjy.png)

>>18172 counterfeit is overfeit
>>18173 That seems to be the case and it's easy to tell that a gen was made with it.
(1.93 MB 1280x1600 catbox_ty4uat.png)

Supposedly SDXL got leaked, anyone got it?
Asking here as well. Is there an alternative for merging multiple loras other than the kohya script? I tried using the old easyscript .bat, which I still have lying around, but got a RuntimeError: "svd_cuda_gesvdj" not implemented for 'Half' error. It would be a nice quality of life addition if it could eventually be added to his new version, but I get that finishing the training UI took priority and I have to say that the result is pretty great!
>>18178 Well never mind. Of course I figured out what was causing the error immediately after posting. Out of habit I saved it as fp16 instead of float.
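for anyone hitting the same thing: that 'Half' error is just the SVD step refusing to run in fp16, so compute in float and only save as fp16. the core of what an SVD merge does, sketched for a single up/down pair (ratios, ranks and shapes are made up, and the real script also folds in each lora's alpha/rank scale):

import torch

def merge_pair(up1, down1, up2, down2, ratios=(0.5, 0.5), new_rank=16):
    # rebuild each lora's full delta-W, sum them with the chosen ratios,
    # then SVD back down to a low-rank pair. the SVD has to run in float32:
    # doing it in half is what throws "svd_cuda_gesvdj" not implemented for 'Half'.
    delta = ratios[0] * (up1.float() @ down1.float()) + ratios[1] * (up2.float() @ down2.float())
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    up = U[:, :new_rank] * S[:new_rank]   # fold singular values into the up matrix
    down = Vh[:new_rank, :]
    return up.half(), down.half()         # fp16 is fine for saving, just not for the SVD itself

up_a, down_a = torch.randn(320, 16), torch.randn(16, 320)   # out_dim x rank, rank x in_dim
up_b, down_b = torch.randn(320, 32), torch.randn(32, 320)
merged_up, merged_down = merge_pair(up_a, down_a, up_b, down_b)
print(merged_up.shape, merged_down.shape)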
>>18142 Holy shit, catbox?
>a confirmed Stability staff is shilling the SDXL model on /g/ >keep eating it up despite it being shit and not a 1.0 release this is fucking sad
>>18181 comfyanon is literally employed by Stability now yknow
>>18181 even if it's partly shilling, people are just starving for something new in the field. SDXL is absolutely irrelevant (as of now) if you're only into anime stuff but it does look somewhat promising as a base model
>>18182 makes sense, he was clearly trained to be the most annoying shill imaginable
>>18184 Thank you maidanon
>>18177 i'm guessing that training loras on these is a faraway possibility right now?
>>18187 More than likely no. And even if you could, it would be like training SD1.x or SD2.x so not even worth the attempt.
wouldn't training an anime LORA off of SDXL just end up like training anime stuff on 1.5 models? Unless XL has anime data in it or something.
>>18189 Damn he's quick >>18190 Closer to 1.5 than NAI
(3.88 MB 1536x2304 16232-2808846475.png)

(3.65 MB 1536x2304 16247-1456026982.png)

(2.20 MB 1536x2304 16340-1864807712.jpg)

(2.23 MB 2304x1536 16373-37319236.jpg)

(2.63 MB 1536x2304 16382-2756946917.jpg)

(2.14 MB 1024x1536 16390-2226886624.png)

(3.74 MB 1536x2304 16432-4195949777.png)

Have you played with epinoiseoffset to get dark scenes? I've been using it in all my gens lately, even if it's just a little
>>18193 Catbox for middle pic? I find it difficult to get girls with slim thighs
>>18193 what model is this?
>>18192 >>18193 Cute and I should step up my regional prompting game. I really should try using it for single character gens as well.
(716.97 KB 800x640 00447-3786659018.png)

What's the best way to upscale with 8GB vram? Tiled controlnet? Tiled vae? Ultimate SD upscale?
(2.08 MB 1024x1536 catbox_0487eu.png)

(2.13 MB 1536x1024 catbox_6w0lto.png)

(1.78 MB 1536x1024 catbox_9nft6c.png)

(2.10 MB 1024x1536 catbox_h6izz7.png)

hey all, back again with an update. one I was working on for a week or so. https://github.com/derrian-distro/LoRA_Easy_Training_Scripts I did a bit of an overhaul of the UI, to allow it to fit on smaller screens. I also did a bunch of other things. YES, I will be updating either today or tomorrow for SDXL.
>>18199 I guess I prefer the old look with the subset being included on one tab, but hey, it makes sense for smaller screens; I still know anons running less than 1080p screens these days. looking forward to the SDXL update, but I just found out the files I downloaded are only compatible with ComfyUI right now. wonder when Vlad will make it work with the webui
>>18200 not sure when, but same. I honestly don't like node based systems. I will get the SDXL update out tomorrow after work. didn't have time tonight in the end.
>>18198 I usually go for hires fix until about 1024 x 1024 resolution, or something like 960 x 1440. And then use Ultimate SD upscale + Controlnet Tile from there >>18183 What is SDXL even? Is it just a shiny new model??
>>18202 just trained with an even higher resolution, 1024x, and knowing how it worked with LORAs it might have made the generated outputs with this model a lot more detailed or sharp.
>>18203 Thanks for the info! I'll wait for it to be working on A1111 and then see what it can do. Hope it leads to some Anime stuff soon as well
>>18204 To specify, perhaps the refiner portion of it can be used for Anime images as well? I'm not very knowledgeable on this stuff
(2.11 MB 1024x1536 16442-5890485.png)

>>18195 https://files.catbox.moe/hm093k.png i don't think i did anything specific? full body probably maybe comes out slimmer. https://civitai.com/models/65214?modelVersionId=94703 >>18196 based64 >>18197 maybe? i rarely use it for other than seperating characters >>18194 how'd you manage that? for me it barely makes things darker and just fries stuff if i go above 3 weight or so.
>>18205 refiner seems to be really weird
>>18207 I'd love to see some examples of what it does if you have any
speaking of based64, still waiting on based anon for an answer on the recipe mix, plz respond
>>18209 for 64? >AOM2 hard x hll3 final x NAI final pruned x defmix red that's all I can remember for 64, it was made with basic mixing
Fucking Genshin giving Klee such a cute outfit uoooohhhh actual first time I spent money on the game fuck
(1.77 MB 1024x1408 catbox_ozemdt.png)

if any bakers need artist suggestions, feel free to ask :^)
>>18210 AD the 3 and then defmix red 0.5? Ok, I'm gonna try something with a different finetune and see how it works.
>>18210 >>18213 (Me) and this is for v3 right?
>>18208 sorry didn't save any, check cuckchan's sdg thread.
>>18210 wait did you just mix aom2 with hll3 with merge, not add difference?
>>18216 not him but it's 100% an add difference.
(1.38 MB 1024x1536 00060-2618295873.png)

(1.56 MB 1024x1536 00017-1777172024.png)

(1.70 MB 1024x1536 00060-498734012.png)

(1.40 MB 1024x1536 00868-321036722.png)

https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg added zako-(arvinry) >>18212 shoot my guy >>18218 thanks. catbox for first one?
(1.33 MB 1024x1536 00653-1105386835.png)

(1.53 MB 1024x1536 00014-406421122.png)

(1.84 MB 1024x1536 00344-3565393788.png)

(1.81 MB 1024x1536 00010-1260260402.png)

Do any of you have experience using "inpaint models" - and are they needed / useful if you're using a model like based64?
>>18192 Kantoku pussies so aesthetic
(1.32 MB 1920x1920 00011-1793236067.png)

first time here got a question, is it cool to ask for catbox/lora links here?
Didn’t know an Hll5 vtuber model was made, not sure about the LORA merge but I’ll see if the one that wasn’t merged with a LORA works well for a new mix
>>18224 yea, within the past couple weeks
>>18206 >based64 Still using based64 as well, a solid model! Took a long break from prompting, seen an explosion in models but end up coming back to based64. For the darkness, you need to throw in some dark prompt stuff as well. Same prompt on all 4. https://files.catbox.moe/i5ohep.png the silhouette of a naked girl is backlit by soft moonlight. starlight twinkles on the surface of the water. color graded to cool shades of purple and blue, medium breasts, nipples. night, dark, pitch blackness <lora:epiNoiseoffset_v2:2> Negative prompt: (low quality, worst quality), (bad eyes), (missing fingers), (broken legs), (bad_prompt_v2:0.1) Steps: 50, Sampler: DPM++ 2M Karras, CFG scale: 5, Seed: 502690660, Size: 512x576, Model hash: fdd93be86a, Model: Based64-Mix, Denoising strength: 0.61, ENSD: 31337, Hires upscale: 2, Hires steps: 25, Hires upscaler: Latent (bicubic), Discard penultimate sigma: True
anyone make a LORA of Klee's new outfit aside from the chink on Civitai?
>>18228 crazy jpeg compression look going on there
>>18230 Yeah. Only noticed after posting. I fucked up and saved the original as jpg then did some inpainting. Switched to saving jpg for a while. Wanted to save space since my outputs folder is over 12GB now. But webui's jpg compression is so shit I went back to png. If I re-save a .png as .jpg with Irfanview then I get a visually identical result. Wish I could mass process my pngs while preserving the metadata.
>>18231 you could bump up the quality under saving images/grids from the 80 to like 95-99. still saves a lot of space with less artifacting.
>>18232 Thanks. Don't know how I missed that. On 95 it's good enough that I can't tell. Still got the existing 12GB. Probably need to prune it, some of the old gens are looking really bad now. Saw a thread where exiftool could maybe transfer metadata. If I get sufficiently bored, will try.
>>18226 The only thing I dislike about based64 is the noisy backgrounds, but that usually goes away when using a style lora. >>18233 You can't transfer and keep the metadata as "parameters", it will say something like it's not supported or some shit, so you'll have to do it as either a comment (won't work when drag and dropping the image into the prompt field) or an exif comment, which should probably work, as that's how webui saves jpegs.
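If anyone wants to script the mass conversion, something like this with Pillow + piexif should do it, shoving the png "parameters" text into the jpg's exif comment (rough sketch, untested on my end, filenames are just placeholders):
[code]
# pip install pillow piexif
# copies webui's "parameters" text chunk from a png into the jpg's EXIF UserComment,
# which is the field webui itself writes when it saves jpgs
from PIL import Image
import piexif
import piexif.helper

def png_to_jpg_keep_params(png_path, jpg_path, quality=95):
    img = Image.open(png_path)
    params = img.info.get("parameters", "")  # webui stores the gen info under this key
    exif_bytes = piexif.dump({"Exif": {
        piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(params, encoding="unicode")
    }})
    img.convert("RGB").save(jpg_path, "JPEG", quality=quality, exif=exif_bytes)

png_to_jpg_keep_params("00001-1234567890.png", "00001-1234567890.jpg")
[/code]
Loop that over the outputs folder and it should chew through the 12GB in one go.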
(2.04 MB 1024x1536 catbox_vmdh0t.png)

>Download a bunch of new LORAs off civitai >Have fun mixing them and making all sorts of extremely degenerate shit >mfw after I come down from the rush of having new toys to play with
(2.72 MB 1280x1920 catbox_1825cf.png)

>>18229 First attempt but it already looks way better than the civitai one so here you go https://mega.nz/folder/2FlygZLC#ZsBLBv4Py3zLWHOejvF2EA/folder/ON0DDJgZ
>>18236 Make new LoRAs instead, you will never lose the new toy high feeling because you can always go back and improve upon it.
>>18237 thanks, baking my own, but since most anons here bake with the same settings I do, the only thing that might make it different is restarts.
>>18222 >>18192 >kantoku-new-v8-better-nudity Where do you get this Lora? Simple slits are the best.
(2.45 MB 1024x1536 catbox_yfwnhm.png)

(1.97 MB 1024x1536 catbox_xp2a5q.png)

(1.55 MB 1024x1536 catbox_na4gi9.png)

(2.03 MB 1024x1536 catbox_11we3h.png)

>>18236 haha dude that's wild i'd never do such a thing
hey all, like promised. https://github.com/derrian-distro/LoRA_Easy_Training_Scripts/tree/SDXL first thing to mention. I only tested it to the point that I was able to train a lora. so I can't say for certain it's bug free. but i'm sure it's fine. report bugs as you find them please!
(2.96 MB 1280x1920 catbox_myrdd1.png)

(2.88 MB 1280x1920 catbox_xfc9oz.png)

(2.98 MB 1280x1920 catbox_t5dt0p.png)

(3.04 MB 1280x1920 catbox_ekcwo1.png)

>>18239 I think the only thing I'm doing the same as most people here is low dim, but interested to see the results either way
(2.02 MB 1280x1920 catbox_8yzjf7.png)

(2.22 MB 1280x1920 catbox_v7igq6.png)

(2.11 MB 1200x2000 catbox_vpd37p.png)

(2.25 MB 1280x1920 catbox_5yoi27.png)

Extra test Lily / Toketou
>>17708 >>17742 Can I request a catbox for these they are very nice
>>18184 Is this inpainting? How do you use character loras for multiple girls without turning them all into twins?
Does a Lora for the artist g-reaper exist?
>>18246 Regional prompter but I did inpaint the eyes and the hand. https://github.com/hako-mikan/sd-webui-regional-prompter
>>18250 cunnies where
>>18250 Never used regional prompter before, but how do you segment pics when the characters overlap like in your post?
(302.95 KB 1872x1304 regionalprompter.jpg)

>>18252 You just split the prompt into segments with ADDCOL, it just works but expect some touching up and inpainting.
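A rough example of what the prompt ends up looking like for two girls side by side, if I'm remembering the keywords right (descriptions are just placeholders, the column ratios get set in the extension tab):
[code]
2girls, side-by-side, park ADDCOMM
blonde hair, red dress, smile ADDCOL
black hair, school uniform, pout
[/code]
The bit before ADDCOMM should be shared across both regions (I think), and ADDCOL is what splits the canvas into columns.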
>>18253 Did it give the left girl's arm a proper skin color despite it being in the second zone or did you have to inpaint that?
>>18254 Yeah she got the correct skin color but it can get genned with the wrong color at times.
how are the new nvidia drivers with LORAs and SD? I remember someone saying to stay on 531.79 because VRAM usage on the newer drivers was handled a lot differently
(1.26 MB 1280x1600 catbox_tktbau.png)

u u ua ua
(539.41 KB 800x640 catbox_fyl7v3.png)

(2.42 MB 1280x1920 catbox_pgah7m.png)

So it seems that you can use regional prompting as a substitute for composable lora. And not just for having separate loras on different subjects, it seems to ignore the negative prompt as well. Sucks for anything trained on low quality source material but great for getting the most accurate results for small details and very accurate style transfer.
looks like the new tiny senpai doesn't have too much art on pixiv, just one page, weird. Anyone know how making LORAs from anime screencaps works?
(1.14 MB 640x960 catbox_3hwyqo.png)

(799.62 KB 640x960 catbox_zs0yyx.png)

>>18258 It's also great for if you have a lora with no metadata and want to figure out what it is. These are both without a prompt, only difference is one has regional prompter enabled and generation mode set to latent
(1.22 MB 1024x1536 00716-3029466859.png)

(1.38 MB 1024x1536 00245-734767594.png)

(1.36 MB 1024x1536 00025-627753845.png)

(1.26 MB 1024x1536 00062-1379268977.png)

(1.55 MB 1024x1536 00878-2230897671.png)

(1.79 MB 1024x1536 00005-3607542954.png)

(1.97 MB 1024x1536 00006-509503469.png)

(1.54 MB 1024x1536 00014-998059078.png)

>>18259 the same way as with regular artwork? There is a thread with an old workflow on how to get set up but it's a pain
Has anyone ever gotten any of the auto MBW checkpoints working? I have a bunch of photorealistic models that I want to mix together to use as a base for an AOM-like base model, but I don't want to do the mixing manually if I can help it.
Working on a rentry for all the loras I've made, hopefully making it a little more accessible. There's a few there I haven't posted here yet https://rentry.org/zp7g6 Almost done retraining a lot of my older loras and I will be changing a lot of the previews but just wanted to get something going for now.
>>18265 some of your preview images give the feeling of "I forgot to take my meds" lol Looking forward to seeing it fill out
A Waifu Diffusion guy just popped into /g/ saying they are doing a test drive finetune on SDXL in a similar NAI format. https://boards.4channel.org/g/thread/94563422#p94563665 It's also Kohya training compatible already so if anyone wants to do a test run on training a LoRA or something then go for it.
>>18241 These are absolutely lovely. Please do go on.
Jesus fuck wish I started using NAI vae earlier man. even considering occasional blank gens it's worth using over 84k. I used to have to go to like 4 cfg to be able to use some loras, now it's all great even at 10
(1.15 MB 1024x1536 00067-1524445066.png)

(1.43 MB 1024x1536 00000-3400083048.png)

(1.44 MB 1024x1536 00001-2418506470.png)

(1.64 MB 1024x1536 00020-4051307498.png)

https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg added op-na-yarou >>18269 what model are you using where you even need a CFG as high as 10 >>18270 river prompts are timeless
(1.02 MB 1024x1536 00411-2103753594.png)

(1.17 MB 1024x1536 00030-3080825879.png)

(1.72 MB 1024x1536 00036-1472582206.png)

(1.50 MB 1024x1536 00312-1775659993.png)

>>18271 This looks like a fun one for style mixing but the weather is too nice for genning waifus right now.
>>18271 >what model are you using where you even need a CFG as high as 10 I don't need it, but it's still nice to be able to go there.
Is cosine with restarts annealing with warmups just cosine with restarts actually using the warmup ratio?
even if we do get an anime SDXL model wouldn't the LORAs have to be trained on 1024x? I know anons here are already doing that but I wonder how many low vram users will get filtered
Getting back into training loras, what's the new meta? Did locon shit actually turn out to be useful in any way? What about optimizers and samplers, still adamw8b and cosine with restarts?
>>18279 locons are good but basic LORAs are still good too, we have anons still using adamw and some are using linear, cosine, and cosine with restarts. The new meta I guess is training at 1024x, which is funny because that started before SDXL came out, and SDXL does base gens at 1024x
>>18280 >1024x damn my 6gb vram ain't gonna be able to handle that I think I can run it with grad checkpointing but it will actually take forever to train
(2.14 MB 960x1440 02187.png)

Take me to church It's been nice to not prompt exclusively Chitose
Rentry with all the loras I've made and remade up til now is complete https://rentry.org/zp7g6 Hope this makes it easier to discover loras you might not know the name of but would recognize the look, because at least for me that's the biggest problem I have when looking for loras. I'll populate the empty spots and probably redo some of the jankier ones like allcharacters18 soon. Open to suggestions if anyone has any before I post this anywhere else.
>>18278 kohya says they can be trained on a lower resolution but he also says the quality would very likely be worse
>>18283 What's your reasoning for generating everything using the method in >>18260? Most of the previews look bad and this is coming from someone who has found several of your LoRAs good to use.
>>18285 Isn't it done to figure out what a LoRA is / does without going through your archives or randomly prompting until the LoRA does something? Probably easier to generate preview images without a prompt too
>>18286 I get that but that should be something the end-user does if they're unsure. If you made the LoRAs you should know what they do and how to best provide an example for them.
>>18285 I didn't cherrypick any of the previews which is part of it but it's much easier to generate previews all at once and it's very clear what the style is supposed to be. This is also another issue I have when the uploader has examples but you can't really tell what is and isn't the uploader's artistic liberty.
>>18287 How do you feel about a comparison like this to show the differences?
>>18289 A lot better. Providing the latent example is neat but nearly nobody else is going to care about that and it really only suits a use case like I mentioned in >>18287 imo.
Anyone else getting this weird bleed-over after updating? Hadn't updated in a few months, updated and now my loras are bleeding over between txt2img and img2img. I'll try to upscale one image and it will use the lora from txt2img and vice versa for some reason now.
(24.31 KB 700x994 cover.webp)

>>18241 Catbox please? I've been trying to find any lora of skinny girl with puffy nipples, anyone know if that exist? Something like this one
>>18292 read the filenames alternatively get the userscript https://gitgud.io/GautierElla/8chan_hdg_catbox_userscript
(2.23 MB 1280x1600 catbox_98e3q2.png)

(2.13 MB 1280x1600 catbox_saf02w.png)

>>18292 Try jyuiro (e10) https://mega.nz/folder/2FlygZLC#ZsBLBv4Py3zLWHOejvF2EA/folder/rFtAhRJb Maybe boukoku daitouryou (president) but it's a little rough around the edges
>>18280 >new meta I guess is training at 1024x It's not a new meta wtf are you even talking about, even the dude that trained 1024x said it's worthless
(1.94 MB 1152x1664 00111-3878257061.png)

(2.02 MB 1152x1664 00133-3624651839.png)

Bratty hacker corrected.
>>18271 As always funny to compare my loras from four months ago against a newly done one Nice job anon!
>>18296 can you share the lora?
Trying to get back into training since I've been unable to use my computer for almost 2 months now. This is more targeted at you ez scripts anon since you're frequently here, what's a good default and what's the new format the script saves stuff at? I was seeing some shit like ia3, dylora etc and wasn't sure what was better or how to see what format they would be.
>I already filled up a 16 TB hard drive with a ton of training data and models >My Pre-ordered NAS doesn't come in until the end of the month fucking kill me
>>18297 where is your lora uploaded? i checked gayshit but found another one of the same artist from a different anon i presume
(2.12 MB 1152x1664 00109-3220766507.png)

(2.00 MB 1152x1664 00100-2071089946.png)

(2.14 MB 1152x1664 00107-854494144.png)

>>18298 https://mega.nz/folder/ZvA21I7L#ZZzU42rdAyWFOWQ_O94JaQ Sure thing. Lora's a bit finicky though, you'll need a fair amount of tags to reproduce the whole outfit. I didn't bother pruning anything, including the mistagged HKI Bronya. On the plus side, I suppose the outfit is flexible-ish, If you're going to try to recreate >>18296, you'll need another separate lora to get the panties aside.
>>18301 I posted it ages ago >>10039 but the link seems dead
(1.77 MB 1024x1536 catbox_i9eqav.png)

(1.73 MB 1024x1536 catbox_pszva1.png)

(1.37 MB 1024x1536 catbox_vr6q0e.png)

(1.90 MB 1024x1536 catbox_mzm22q.png)

today, in lora mixes that i wish worked better
(1.97 MB 1280x1600 catbox_usjrx4.png)

(908.29 KB 640x960 catbox_u3e45p.png)

(917.02 KB 640x960 catbox_9p4j6q.png)

(1000.63 KB 640x960 catbox_5ja2au.png)

(928.69 KB 640x960 catbox_5t1j16.png)

Why "don't upscale images" should be enabled by default. I also probably wouldn't have made this comparison if easyscripts gui didn't keep ignoring this option, though hit seems to have stopped ignoring the gradient checkpointing option (easyscripts gui has been great by the way, this is really the only issue I've had with it). Image 1 is cache latents, no upscale. Image 2 is cache latents, upscale. Image 3 is random crop, no upscale. Image 4 is random crop, upscale.
>>18309 not familiar with comfyui but in a1111 that happens if you're trying to use a latent upscaler with a denoise below 0.6
>>18310 I'm doing what I mentioned before here >>18258 >>18260 This isn't hiresfix, just the raw output. The training data was almost entirely low res (below 1024px) and having upscaling enabled in the training options gives the pixelated results. On the other end of the spectrum, having higher res images and not using random crop results in bad aliasing. tldr if you want idiot-proof training settings, use random crop and enable "don't upscale images"
>>18307 This prompt keeps on giving but I need a break from it before it kills me.
>>18309 When I load a toml file that has
[code][bucket_args.dataset_args]
bucket_no_upscale = true[/code]
it shows up on the GUI as checked, but it is not actually enabled. Thankfully it's the only line missing when I load the auto-saved toml
>>18311 oh, i see. that's really cool, i might have to start playing with that.
man this is incredibly useful. i'm going through my loras doing no prompt latent gens and it's matching up basically 1:1 with my experience on how well they work and what they're good for.
ataruman: good anatomy, strongly pulls his style, i use this a lot at low power to give torsos a more interesting look if they're coming out too simple
gagaimo: just mutilates pictures
CLAMP: adds noise and screws up anatomy but carries strong color and style
qqqrinkappp: glowing eyes, blush lines, stars and watercolors; delivers exactly as depicted
(615.31 KB 640x960 catbox_q5p4jg.png)

(723.77 KB 640x960 catbox_kyj3wm.png)

>>18315 It's great for revealing flaws or just showing any weird behavior a lora has. Or just getting it to look like how it's supposed to if the model is messing it up. It's just too bad this implementation destroys my vram use unlike the old composable lora extension
(1.80 MB 1024x1536 00014-3884705590.png)

(2.01 MB 1024x1536 00013-3884705590.png)

>gagaimo: just mutilates pictures i'm sorry that's been your experience with it i guess lol
(1.35 MB 896x1232 2778094797-2849112368-.png)

(1.47 MB 768x1056 2778094822-2825933715-_ _.png)

(986.77 KB 768x1056 2778094860-3959811942-.png)

(1.07 MB 896x1232 2778094790-3314896833-.png)

>>18316 this is gonna make it a lot easier to trim my lora collection
abmayo, homare, mizumizuni, alkemanubis (do not open)
>>18317 , he says, smugly not posting a catbox for me to compare with
(51.62 KB 1498x816 wtf.png)

Has anyone else used the Hollowstrawberry Dataset Maker lately? I'm having trouble curating my images since the FiftyOne site doesn't load properly anymore
>>18320 >Hollowstrawberry Dataset Maker the what now?
>>18318 >this is gonna make it a lot easier to trim my lora collection why? this is like comparing the random noise of models to determine whether a model is good or not
>>18319 low quality, worst quality:1.4 and badpromptv2 do pull the style a lot better than my usual low quality, worst quality:1.2 and unspeakable, still mulches hands though >>18322 cause the latent gens i've done agree with my experience using the loras
>>18318 >>18316 >>18315 are you just empty prompting with a lora on?
>>18324 yeah i'm half convinced he's just shitposting
(930.05 KB 640x960 catbox_gu1hpa.png)

(1.01 MB 640x960 catbox_2janvg.png)

(941.53 KB 640x960 catbox_f0jlz8.png)

(2.05 MB 1280x1600 catbox_c7zfzp.png)

>>18324 >>18325 Empty prompt doesn't get great results a lot of the time but yes you can. Even just something like 1girl will improve it by a lot but there examples are empty prompt. For the last one I tried latent upscale
(2.35 MB 1280x1600 catbox_cptnfv.png)

(3.07 MB 1280x1600 catbox_ff0icc.png)

(3.86 MB 1280x1600 catbox_5naue3.png)

(1.94 MB 1280x1600 catbox_cz9296.png)

>>18326 *these Here are some bad examples
(1.18 MB 640x960 catbox_yy9eqk.png)

(1.11 MB 640x960 catbox_4dv4j9.png)

(1.08 MB 640x960 catbox_lhacpg.png)

>>18326 and here are the first 3 with the extension disabled
>>18326 >>18327 >>18328 we're devolving to pre-nai leak wd lol
(893.02 KB 2667x4000 grid-0037.jpg)

(1011.07 KB 2667x4000 grid-0036.jpg)

(987.28 KB 2667x4000 grid-0038.jpg)

(1.28 MB 2667x4000 grid-0035.jpg)

>>18323 >still mulches hands though
38 gagaimo 1 str - (worst quality:1.2), (low quality:1.2), (bad_promptv2:0.2), 3d, render
37 gagaimo 1 str - (worst quality:1.2), (low quality:1.2)
36 gagaimo 1 str - (worst quality:1.4), (low quality:1.4), (bad_promptv2:0.2), 3d, render
35 no lora - (worst quality:1.4), (low quality:1.4), (bad_promptv2:0.2), 3d, render
neat. 1.2 does have slightly worse hands. looking at the dataset again, the strange hand anatomy is probably because gagaimo draws hands in shitloads of different ways
>>18328 >>18327 >>18326 I had a bug with recent a1111 where out of the blue the outputs start to ignore the prompt but keep a resemblance to the loras used. It had a similar fever dream feel, anyone encountered this bug? I usually had it appear when doing multiple batches, sadly don't have pics saved (was on colab)
>>18331 there isn't anything special about it. if it is ignoring the prompt then you are essentially just prompting with the noise of the model+lora. a lora skews the noise towards what was trained in the dataset, so obviously it'd be less random and resemble the lora. it's the same reason why aom can generate the vague appearance of a girl without a prompt moreso than nai or especially sd. you could describe nai as an sd model overfit on making anime girls, aom even more so
>>18305 UOOOOOH!! C-C-C-CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCUUUUUUUUUUUU UUUUUUUUUUU UUUUUUNNNNN NNNNNNNNNNNN NNNNNNNNNNNN NNNNYYYYYYYYY YYYYY!!! uunnnNNNNNAAAAAAAAAAAAGHHHHHH!!!!mCUNNY!!!!!!!!!!!!!!!! brarhhhhhhhhhhhhhhhhhhhh BLARGGGGGGGGGGG UUUUUNNNNNHHHHHHHHHHH UNNNHHHH OH GODDDDDDDDDDDD NOT THE C-C-C-CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCUUUUUUUUUUUU UUUUUUUUUUU UUUUUUNNNNN NNNNNNNNNNNN NNNNNNNNNNNN NNNNYYYYYYYYY YYYYY!!! 😭 😭 😭
>>18333 fucking lol
(2.50 MB 1280x1920 catbox_2rke7l.png)

decided to rebake kantokuaki99ishikei frankenstein lora because the double belly caused by shitty crops started to annoy me. seems to work ok now. maybe. we'll see.
bros I might have overheated on 1girl and now getting into trap territory time to stop
>>18312 One day I'll learn regional prompter / control net beyond fixing a single hand. It won't be today though, I think.
Is anyone here left still trying to do finetunes? I need help with settings
>>18339 I enjoy regional prompting a lot but I only gave control net a try once and thought cool and then I forgot about it.
(2.72 MB 1280x1920 catbox_2hqgbd.png)

Controlnet tile resample + latent upscale is pretty nice and pretty simple. Now if I could just get her IN the jar
>>17518 archive script anon, just thought i'd let you know that it doesn't work with newer threads since they don't have "https://rentry.org/voldy" or "https://rentry.org/nai-speedrun" in the OP anymore. just a heads up. might be the same with /g/, but i don't even bother saving from that cesspool anymore
>>18325 >>18328 unironically genning with "worst quality" in the positives and upscaling with "masterpiece" is occasionally strong. also i remember a guy that used regional prompting with no regions to generate a messy image for the first pass and then used that as the noise input for the highres pass
(1.91 MB 1280x1920 00030-3612626944.png)

(2.79 MB 1280x1920 00076-1097025543.png)

(2.53 MB 1280x1920 00012-1095974455.png)

(2.88 MB 1280x1920 00250-1589701957.png)

(2.11 MB 1280x1920 00029-4224332513.png)

(2.22 MB 1280x1920 00208-2188907470.png)

(2.33 MB 1280x1920 00030-158025989.png)

(2.80 MB 1280x1920 00222-4187706424.png)

>>18343 thanks, updated with some other links until I figure out a better way of detecting threads. it seems you can't search by subject name in archiveofsins complicating matters.
Anyone tried this? https://rentry.co/ProdiAgy
>>18350 oh god not that schizo again
>>18350 Least obvious staber shill.
>>18351 It's that cockdealer nigger I guess? He even managed to make my prompts look like fucking garbage when he yoinked them.
>>18353 it is he's been posting his garbo on every AI board and schizoing out hard on /hdg/ what lack of attention does to a mf, at this point just troon out and get your fix from that
>>18354 Jesus Christ, he may as well skip the trooning step and just jump straight to the last step and do everyone a favor.
>>18354 Honestly he doesn't bother me that much, at least he's somewhat amusing. SD generals have been really fucking stale lately.
(2.25 MB 1024x1536 catbox_7wreb7.png)

mfw i installed regional prompter and enabled that setting that allows you to use a different sampler for hires fix and now i feel like i have too many things to twiddle
Does anyone know if there is a script that can sort your images into folders based on the aspect ratio buckets?
(1.32 MB 1024x1536 00088-3721360342.png)

(1.35 MB 1024x1536 00027-3957411354.png)

(1.54 MB 1024x1536 00004-1610028101.png)

(1.65 MB 1024x1536 00030-3310901731.png)

(1.32 MB 1024x1536 00128-2607603402.png)

(1.47 MB 1024x1536 00025-245490608.png)

(1.52 MB 1024x1536 00016-4247321183.png)

(1.65 MB 1024x1536 00361-2107444705.png)

Ezscripts anon if you're still checking here are you / would you be able to add prodigy to the training script?
>>18357 >clamp _ CardcaptorSakura_v10 Is there a source for this one?
>>18363 https://files.catbox.moe/jjofw1.safetensors i thought i got it from civit but none of the lora hashes match. maybe it's an old version.
>>18309 Oh, didn't realize it was ignoring the option! my bad, I'll look into fixing it
>>18362 yeah, I planned to, was busy getting a new computer! and working on SDXL support mainly. I got myself an A6000 so now I can do some really fun stuff
>>18367 Thanks for reminding me about sukumizu, time to coom.
>>18367 which mmo do you play? always looking for something to do besides training shit loras
>>18368 You're welcome, very enjoyable to gen so I will be genning more. >>18369 Super cute, I play a rather niche full loot PvP sandbox MMO called Mortal Online 2.
>>18370 >>18369 based and neetpilled
>>18344 this is equivalent to seed gacha
>>18370 holy shit anon you okay? Mortal Online 2 is such weird game where my blood pressure was going through the roof while playing it cannot recommend it
>>18373 Kek, yeah it can be a blood pressure increasing game but I do enjoy it. I'm also in a pretty big guild, the solo player experience is terrible in this game unless you plan on being a mounted archer asshole killing lone riders.
>>18366 Bless, was just thinking what happened since you're usually prompt. Another issue I noticed is LoKR seems bugged, my fault for using network dropout with it (three times now) since it shows an error that it doesn't support it, but when it finally does try, it seems to be asking for LoKR w1a while it's trying to use LoKR w1? I closed the program so I don't have the error in front of me.
>>18375 hmm... odd. honestly speaking I struggle to understand what kohaku does sometimes. not his code... he just has some interesting ideas... I was planning on doing a bunch of bug fixes today anyways, so i'll look into that as well
>>18372 isn't everything, though?
>>18377 >shorter and balder than my little brother due to seed gacha this is horseshit man
>>18376 Tried it again to see what the error was, it gives some missing value error. DyLora also gives me a failure when I try to use it, though I've admittedly only tried both with DAdapt, and I enabled decouple in the additional args.
>>18359 zirba, sen (astronomy) if you like them pls
>Master, there is trouble! >She craves cock, you have to help her! The textgen stories are writing themselves.
>>18378 hahaha. shit, it's actually true.
>>18382 but I want the choco...
>>18385 She will be joining.
is this some kind of pedo board or something..? wtf man
(2.12 MB 1280x1600 catbox_ptqey7.png)

(2.35 MB 1600x1280 catbox_uaef6k.png)

(2.45 MB 1280x1920 catbox_kj1f7y.png)

(2.19 MB 1280x1600 catbox_1ph9jl.png)

(1.98 MB 1600x1280 catbox_6cspdt.png)

>>18381 Nta but I made a Zirba lora https://rentry.org/zp7g6#z
>>18380 oh that's interesting, the DyLora implementation that is there is kohya's because non-dev branch LyCORIS has broken DyLora for the time being. I just got done adding Prodigy to the dev branch, i'll look into this as well, and see what might be going on.
>>18380 https://github.com/derrian-distro/LoRA_Easy_Training_Scripts alright, added Prodigy, fixed the DyLORA issue (which was actually an issue with D-adapt) and removed support for dropouts with Lokr as they don't support them
I've been out of the loop for months, what is the difference between LoRAs, DyLoRAs, Lycoris, and these other networks?
(2.00 MB 1024x1536 catbox_sdg6u5.png)

(1.61 MB 1024x1536 catbox_vu549d.png)

microbikinis in the morning, leotards at night
>>18392 >>18391 Bless saved me just in time so I didn't have to load up bmaltais one. You're a life saver.
>>18390 I will try it ty
>>18394 cute
>>18393 Lycoris is a blanket term covering LoCon, LoKR, IA3, DyLoRA, and LoHa. In the general case, the only useful ones are LoHa and LoCon. Kohya themselves support LoCon as well as LoRA and DyLoRA. DyLoRA is effectively training a lora at multiple dims at once so that it can dynamically decide what dim size each layer needs, but due to how long it takes to train, it's fairly useless.
My personal "these are worth it" list, in order of usefulness: LoCon, LoRA, LoHa. The rest I generally consider trash for various reasons.
IA3: fails to work on models too far from the base... entirely, doesn't really learn much in general, and has had exactly 0 made that weren't pure dumpster fires.
LoKR: similar issues to IA3, less problematic overall though, and could see some use if it weren't for the fact that it generally needs stupidly high dims to learn anything useful, and even then it suffers a lot of the same high-dim issues as LoRA.
DyLoRA: as mentioned above, takes forever to train, because you are effectively training multiple LoRAs, or LoCons, at one time. It's not bad per se, it's just generally not worth the extra time spent for what amounts to pretty much a normal lora.
LoHa: this is a bit of an odd one because it shines with complexity. As in, if you want to train 40 characters into one lora, make it a loha. That's pretty much its best use case. I wouldn't use it for other things.
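For actually picking between them on the kohya side it's just the network module plus network args, roughly like this (going from memory, double-check the exact arg names against the LyCORIS readme):
[code]
# LoCon (lora + conv layers) via LyCORIS
--network_module lycoris.kohya --network_dim 16 --network_alpha 8 --network_args "algo=lora" "conv_dim=8" "conv_alpha=1"
# LoHa is the same line with "algo=loha" (likewise "algo=lokr", "algo=ia3", "algo=dylora")
# plain LoRA stays on --network_module networks.lora
[/code]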
>>18395 Np man, I just happened to have time after work, and it was actually surprisingly easy to fix up everything.
>>18398 thanks anon, I would've asked in /hdg/ but its been real fucking slow there lately and sd/g/ has turned into a fucking cancerous cesspool.
Do we have any new shota loras or shota checkpoints? It's rather difficult to prompt shotas compared to lolis.
>>18398 Chara LoHA's I made before were pretty good. Trying out some DyLora's now, just finished baking so see how they turn out. They got recommended over LoHA which I'm familiar with but the training time isn't really a detriment since I just train them overnight and while I'm at work in queue. See how they turn out and might just go back to LoHA's though since file size is kinda nice to reduce. >>18399 Only thing I need now is a pause or way to stop a training queue that retains the queue. I can just close the thing and load the toml back up and repopulate the queue, I'm just spitballing 1st world problems at this point.
>>18402 >>18399 Also how do you input these values, part of why I've had some internet troubleshooting with another anon about why prodigy isn't working well, it needs these values and I don't know where to put them in with your gui.
d_coef=2.0 d0=1e-6 decouple=True weight_decay=0.01 betas=0.9,0.999 use_bias_correction=True safeguard_warmup=False
--min_bucket_reso 320 --max_bucket_reso 1024 --log_with all --lr_scheduler_type "CosineAnnealingLR" --lr_scheduler_args "T_max=2560" "eta_min=1e-7"
The tmax is your step count.
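For context, my understanding is that on plain sd-scripts the key=value pairs are what goes into --optimizer_args while the --flags are just normal arguments, roughly like this (untested guess on my part, and prodigy also wants the learning rates left at 1.0 since it adapts them itself):
[code]
--optimizer_type Prodigy --learning_rate 1.0 --unet_lr 1.0 --text_encoder_lr 1.0
--optimizer_args "d_coef=2.0" "d0=1e-6" "decouple=True" "weight_decay=0.01" "betas=0.9,0.999" "use_bias_correction=True" "safeguard_warmup=False"
--lr_scheduler_type "CosineAnnealingLR" --lr_scheduler_args "T_max=2560" "eta_min=1e-7"
[/code]
I just can't tell where the gui expects any of that.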
anyone here made a lora of Aqua from Bombergirl?
I've been going through most of the basics (maid, officer, nun, nurse). Any recommendations for professions?
>>18405 gyaru knight-secretaries
>>18406 in pencil skirts
(2.34 MB 1280x1600 catbox_kxkh05.png)

(2.18 MB 1280x1600 catbox_1z034p.png)

(2.08 MB 1280x1600 catbox_xk5sm0.png)

(2.17 MB 1280x1600 catbox_1z3vib.png)

Been adding more loras, not sure if it would be best to add a changelog to https://rentry.org/zp7g6 but I also just don't want to keep spamming this link. Anyway the newer ones I don't think I've mentioned here are koahri, menthako, mukuton, tamaya, watanon, kareya, luicent, nezumin, and probably a couple more
>>18405 Day care worker, waitress(beer maiden could be fun and the hooters LoRA is pretty nice), tacticool operator, cheerleader, construction worker, idol.
>>18409 A changelog would be appreciated by hoarders like myself so we know what's new.
(869.43 KB 1024x1536 00888-1572903070.png)

(1.85 MB 1024x1536 00048-1104297048.png)

(1.92 MB 1024x1536 00263-3157818279.png)

(2.04 MB 1024x1536 00001-456169606.png)

(1.72 MB 1024x1536 00064-817729383.png)

(1.71 MB 1024x1536 00074-3658322828.png)

(1.67 MB 1024x1536 00183-3523902051.png)

(2.05 MB 1024x1536 00188-2584048855.png)

(2.63 MB 1280x1920 catbox_5f4fp4.png)

(1.71 MB 1280x1600 catbox_do6rx6.png)

(2.25 MB 1280x1600 catbox_ngd1py.png)

(2.17 MB 1280x1600 catbox_695yul.png)

Added Nanameyomi and a changelog https://pastebin.com/0nW739JH
>>18409 Some pretty good suggestions in here, I'll get around to them after going back to some Chitose >>18406 I'm not cultured enough to know what this is
(2.76 MB 1280x1920 00000-848609921.png)

(2.86 MB 1280x1920 00001-848609922.png)

(2.83 MB 1280x1920 00007-2017144758.png)

(2.64 MB 1280x1920 00006-3897398484.png)

river girls
>>18417 I'm sorry officer, at least only one of them is the river prompt and the rest are fountains. Yes I have a problem, please arrest me.
(2.07 MB 1024x1536 00024-3208664541.png)

(2.00 MB 1024x1536 00020-2446682213.png)

(2.11 MB 1024x1536 00042-1422773399.png)

(2.21 MB 1024x1536 00020-2713392132.png)

(1.95 MB 1024x1536 00060-4016046428.png)

(1.95 MB 1024x1536 00111-1139220640.png)

(2.11 MB 1024x1536 00042-1422773399.png)

(2.03 MB 1024x1536 00364-2769619759.png)

(1.55 MB 1152x1664 00009-3377747455.png)

Been missing since early June, any significant advantages and/or particularly good LoRAs?
>>18422 advances* Never phoneposting again
(2.00 MB 1152x1664 00012-3377747456.png)

(1.91 MB 1152x1664 00012-3377747455.png)

>when you just want to headpat your loli but she has other plans
>>18398 thanks anon, I was also out of the loop and wanting to know this
>>18405 girl scout
>Try prompting some mid-level MMO clown suits to see if I get anything funny >Still haven't gotten anything over-the-top terrible.
>>18399 >>18403 Set up the optional args, and on every set I've tried training with prodigy it doesn't do any learning at all. Have you tried using it by chance, to see if it's a user error (me)?
>>18421 >>18424 Very cute.
>>18419 where did you get this version of NAI anon? >animefull-final-pruned_ema-560000_fp16.safetensors does it even make a difference compared to the leaked version from months ago?
>>18422 >>18423 Guess not, huh?
>>18432 SDXL 0.9 leak and just yesterday Waifu Diffusion's finetune of SDXL. They are meh at best
Anyone found a way to fix nocrypt's colab?
(1.57 MB 1024x1536 00225-3814530593.png)

(1.63 MB 1024x1536 00123-4158287699.png)

(1.93 MB 1024x1536 00368-1704172223.png)

(2.12 MB 1280x1920 00027-3636293756.png)

https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg added kame-(kamepan44231) >>18431 i'm confused--i didn't use that version (unless they are the same as the original nai leaked model). i'm using the original ckpt file from the nai leak, with the hashes [89d59c3dde] or [925997e9]
(1.15 MB 1024x1536 00037-1584562675.png)

(1.43 MB 1024x1536 00084-285734160.png)

(1.71 MB 1024x1536 00093-1057975432.png)

(2.37 MB 1280x1920 00023-3207302509.png)

>>18435 >>18436 Cute style, now gotta compare with the locon that already was there... Btw, any atahuta lora out there?
(1.75 MB 1536x1024 catbox_lf2kxq.png)

>You came to the wrong neighborhood, Motherfucker. >>18435 Another very cute style, too many styles to use and not enough time...
>>18438 wew that's a good one
>>18439 Well the actual idea was to make my own version of this image but the end result didn't end up looking that great. I will give it another try later.
>>18440 Bah I just overcomplicate my shitpost gens.
Started using the batch face swap extension. It's supposed to automatically inpaint pictures for you but I feel like it works only half the time for some reason >>18434 What's the issue? It's working perfectly for me
I haven't posted in /h/ in like MONTHS after the purge Today I found out i'm ISP banned from /h/ specifically What the fuck?
>>18443 you share a subnet with a subhuman
>>18444 The issue is actually weirder than that. I'm ISP banned from a lot of places but can randomly post sometimes in boards like /v/ and /vg/. Like there's a 50/50 chance my post goes through and if it doesn't then it'll probably go through if I change browsers. Doesn't work on /h/ for some reason though, I simply can't post there at all. Shame that site doesn't have any sort of tech support I could contact and try to get it solved because it's quite the bizarre bug.
>>18445 just buy a pass :^)
>>18446 Fuck the pass, I'd rather find ways to cope with no shitposting or just live on this site instead.
>>18447 >just live on this site instead. At least there won't be a tranny randomly handing out 2-3 days bans for what he thinks are OT posts.
>>18448 that tranny single handed destroyed /hdg/
>>18449 He's most likely a moderator for /g/ as well - or at least he keeps an eye on all AI-related threads on it.
>>18450 the mods on /g/ don't fucking ban anyone, everyone just fucking avatarfags and you even have a hard confirmed StabilityAI employee shilling, both examples are bannable offenses.
>>18428 Sorry about the really late reply! I am pretty busy during the week now. I haven't used it, no, and I honestly don't plan on using it either. I only used it to make sure it was running at all. I've only had bad results with tuning optimizers like prodigy and dadapt. If you are using dylora, then you should probably assume your required steps are 4-5x more at a minimum, because they take a stupidly large number of extra steps.
>>18425 Np, I like explaining things I can. I end up putting way too much effort into trying to help others
>>18402 Yeah, that was actually something I was thinking about implementing, well, the ability to kill the ui and store the rest. As for loha, I just don't like the extra gen time penalty they have, so I don't train them for things that don't need them, like most character loras, or a single-style locon. Just a side note, but I can usually fit a character + multiple outfits into a dim 16 lora, and if not that, a dim 16 conv dim 8 locon. Probably can do even smaller if I wanted to. It's honestly amazing how little space is needed if you know how to train it.
>>18451 >the mods on /g/ don't fucking ban anyone They absolutely do lmao. They also ignore the guy that's asking to have dick pics sent to him in exchange for his Claude/OpenAI proxies.
>>18455 I guess just not on /sdg/ because that place is a cancerous attention whoring drama and corporate shilling cesspool
Finding new loras is so hard with how cucked civitai is and their new search is hilariously shitty too which makes everything harder Shame it's either that or digging around for megas, rentries and other weird hidden places that you'll never find if you don't know they exist
>>18452 >4-5x more steps Wish that was mentioned at all on kohaku's page about it, the short summaries really do not help with knowing what I'm doing with them. If that's the case I guess I can see why none of mine have worked out, I've been training them like a lora with 2k-3k steps or so. >>18454 So LoCons are pretty good? I tried to train one earlier with about 2k steps and it barely got any training at 32 dim 16 alpha with matching conv dim / alpha 32-16. I was using my old settings that worked pretty decently for LoRAs since I'm not too familiar with conv dim / alpha settings and was just ballparking them. That being said, LoCon works pretty well for charas and concepts? I see DyLora shilled a little, but if training takes that long I'm gonna avoid it all the same, and the only other one I've experimented with is LoHA and that felt okay.
>>18455 lmao you jinxed me, I got banned for spamming when I was calling out the fucking cancerous tranny shitting up the thread
>>18459 But can you beat my single post 3 days ban?
Any particular upscaler I should use for high-res fix or do they all just fall under "personal taste"?
>>18460 kek that some weird collateral damage does anything fun happen in aicg?
>>18461 >do they all just fall under "personal taste"? absolutely not It depends on the image/model, remacri is a jack of all trades
>>18463 agreed, remacri and lollypop are my go to neutrals because it doesn't fuck up my generation, I don't really use anything else.
>>18462 nonstop third world mental illness >>18461 noisy and detailed -> simple and clean latent, remacri, lollypop, 4x ultrasharp, 4x animesharp
>>18462 >does anything fun happen in aicg? Like I've said, some kike is setting up a massive blackmail operation by giving his Claude/OpenAI proxies to desperate AI chatbot coomers who send him dick pics. Yes, really. See picrel, it's more or less a CYA statement that says "well, I TOLD YOU I would blackmail you so you have no reason to get angry".
>>18466 Oh fuck, this reminds me of that scumbag tranny that was blackmailing kids into taking HRT injections.
>>18466 This has been going on for days, maybe weeks even. The /g/ tranny jannies are most likely in on it since they hand out bans for minor shit (including minor off-topic) more or less instantly.
>>18463 >>18465 wait what? I have Latent but none of the other ones mentioned as an option. Have 6 different variations of Latent, none, lanczos, nearest, ldsr, r-esrgan 4x, ScuNet and SwinIR as options.
>>18467 >scumbag tranny that was blackmailing kids into taking HRT injections I'd like to say I'm surprised but I'm not.
>>18469 got to download them https://upscale.wiki/wiki/Model_Database >>18470 It didn't happen that long ago maybe last year or 2, I just remember it was an unironic trannycord and on agame server and through some scummy dox threat the victims were forced to buy or make the tranny drugs and take them and to show proof not only of taking them but to detail their effects and changes. I would not be surprised if this is the same guy
>>18469 Those are the default ones, you can grab more here. https://upscale.wiki/wiki/Model_Database Put them in "stable-diffusion-webui\models\ESRGAN". remacri, 4x animesharp, 4x ultrasharp and 4x ultramix are the recommended ones. I find that lollypop doesn't work anywhere near as well as people claim. If you're feeling lucky, "latent" and "r-esrgan etcetc anime 6b" are the ones you should try; they could change your image for the better and they make certain style loras shine (eg cutesexyrobutts), but you need to use them at like 0.6
>>18471 >and through some scummy dox threat the victims were forced to buy or make the tranny drugs and take them and to show proof not only of taking them but to detail their effects and changes. Absolutely talmudic. Total tranny death.
>>18448 reminds me of /vt/ god I hate that retard janitor and the mod enablers "block and delete their post until either it sticks enough for mods to accept the ban or they complain" since telling him it's actually a vtuber you're posting (little known gawr gura) is bannable offense
this is the most active ive seen this place in a way kek
>>18475 (me) in a while*
>>18466 that's the ick on eck guy, he's been selling access to unbanned proxies for dick pics too lol
why not just AI generate the dick pics?
>>18474 I find the whole situation with the Pippa/Phase connect Janny being found out to be one of the people running that cancer twitter account BannedVT memes hilarious
>>18477 Yeah, it's some guy they call "ecker". Has he been doing this for long? Do people really not understand the consequences? The OP in OPSEC should stand for own penis now. >>18478 >why not just AI generate the dick pics? Would probably sniff you out.
>>18479 Wait BannedVT memes was run by a janny? Shit I missed out, I stopped checking into /vt/ (outside of /vtai/) right before the 2nd tempus males debuted.
>>18480 He could sniff my AI generated asshole kek
>>18482 all three of them
>>18482 >>18483 kek It's either a tranny, a faggot kike (redundant, i know) or both. Which means that it's 50/50 between jerking off to them and using them as blackmail material.
>>18481 It was revealed during some new Pippa cucking drama and some Panko bait threads, literally all threads about the pippa cuck narratives were being deleted every second, saw 200 posts deleted in one thread kek
>>18485 Good lord, I'm glad I don't pay attention to all that, even when you aren't playing for a team involved in the drama, it just gets so tiresome.
>>18458 Yeah, all of my styles are locon, and I have a few characters that are locon as well. All of them took only 1600 or so steps at batch 2. So like 30-40 minutes on my old 3060. I think they are amazing, especially for smaller dims. But I may train very differently than most. I don't know what most people are using, because I found amazing settings that work every time for lower dims, and haven't really needed to look into changing it more than small tweaks as things came out or changed. I'm not home right this moment, so I can't share my config, but I can share it when I get the chance.
>>18471 >>18470 >>18467 Are you referring to the r9k groomer?
>>18488 Is that who that was? I thought those were two separate incidents
ControlNet outpainting do go hard
>>18487 I'll take it when you can share. Also LoKR still doesn't seem to work
>>18493 LoKR trained fine for me, not that I tested it beyond actually training it. pretty much something I just didn't care to even try, I consider LoKR and IA3 pretty useless overall. anyways, templates: https://files.catbox.moe/6o1c2l.zip
I don't know if >>18471 and what >>18488 is referencing are the same thing, didn't the r9k/"reiko" shit happen 5 years ago?
Invoke 3.0 looks pretty good, they finally tidied up the UI so it's not a clusterfuck anymore and they added (multi)controlnet. Might give it a try later, the only drawback I can immediately think of is the lack of a danbooru autocomplete extension. https://youtu.be/A7uipq4lhrk
>>18495 Yea thats what it was >5 years ago holy shit where did the time go?
>>18494 The chara one, I assume it was supposed to be locon, or is it actually lora?
>>18497 You tell me, my last 7-8 years are a complete blur
>>18498 I usually train lora for characters, even multi outfit, dim 16 for like 4 outfits is fine usually. but I just tack on conv dim 8 alpha 1 if I need locon.
>>18499 yea, my life 2017 and onward has been a depressing blurry mess
>>18501 Ditto
do you guys have any tips for consistently generating an OC character? do you just have to use the same/similar prompt over and over and get lucky?
>>18503 generate a handful and then train a lora on them lol
>>18505 that was me a couple weeks ago but in 4 months
how's the latest version of the webui? Is it compatible with SDXL yet or is there an extension you need?
>>18508 From what I read, the base SDXL model loads normally but the refiner model is still an issue. There really isn't a point yet to use SDXL at this current moment for anything anime or porn however.
Tried to do a small challenge. Can you quickly get a similar image just by prompting? Pose replication would obviously be a lot easier with controlnet than just using prompts. The result would need some inpainting, and it's missing an actual background. This is just a proof of concept. Left is the original btw (if that wasn't already obvious).
>>18510 That's actually a pretty fun idea.
(2.30 MB 1280x1600 catbox_eibchc.png)

Could probably do better with more time but here's my attempt
>>18512 Not too shabby! Glad you enjoyed the idea
>>18510 Here's my second attempt. Shimahara probably spent quite some effort on that great original image. And here I am, making a stable diffusion speedrun out of it. Pretty enjoyable. I think I just convinced myself to show 40hara some support. I wouldn't mind a physical copy of some of his art either. After all, Chitose is my favorite... Holy this has backfired
>>18505 I'm not gonna lie, I really like the Anime > 3D LoRA even if it's slightly on the gacha side. I think it might work really well with styles like fizrot's. Also no highresfix for me right now, I can smell the dust heating up and I think it might ignite sooner than later.
>get banned from 4chan for "spamming" sdg thread >sdg thread currently full of spam and attention whoring faggotry I honestly don't fucking get it anymore and I have been visiting on and of since 2006
>>18517 What do you expect from the same tranny jannies that are aiding a reiko 2.0?
>>18516 Yep, it's blursed. I still like it though.
(751.28 KB 640x960 catbox_g1ybvm.png)

(889.94 KB 640x960 catbox_tprgjb.png)

(765.05 KB 640x960 catbox_qk1hh8.png)

Regional prompt updated a while ago and now does nothing without a BREAK or ADDCOMM, ADDCOL, etc. Though this also made me realize that the previous behavior for a single region was doubling everything including the lora and prompt. I was thinking I might have to go and redo all my preview images but having the lora at <lora:2> isn't the worst thing. Though I will have to update the disclaimer on the rentry From left to right: - Original preview, everything is doubled - "Correct" preview, end result is everything at (prompt:1) and <lora:1> - Potential new preview, prompt is normal and lora is doubled. Not too different but should lessen anomalies like weird hands
how does negative lora weight work again? literally just put a -? why/when though?
did voldy ever do local Locon/Lycoris support on the webui?
>>18523 upcoming 1.5.0 release/dev branch
https://github.com/cheald/sd-webui-loractl >only works on dev I'm going to have to switch, aren't I
>one of the trannies on sdg finally got banned there fucking is a god lol
Interesting post for training: https://boards.4channel.org/g/thread/94884114#p94885928 >>pagefile >I linked this to a few anons on /sdg/ before, but the problem is that windows never over-commits memory and the stupid nvidia cuda DLLs reserve more RAM than they will ever realistically use. >You can patch the DLLs and it'll stop allocating excessive pagefile. >https://stackoverflow.com/a/69489193
>>18527 someone should do a test
>>18526 Which one of the thousand residing there?
>check into /vtai/ for the first time in a while >its a bunch of fat chubba and furry chubba posting what the fuck happened???
>>18530 tranny janny-aided redditor invasion and takeover as usual
>>18530 >fat chuuba sorry that was me, was gone for a week so delivered everything at once here's uoh to make up for it
>NEW AGESA UPDATE COMING >NEARLY 3 MONTHS LATER >it's just for ddr5 ram speeds and stability above values only aspies, trannies and marketing teams care about FUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU
>update kohya >breaks >reinstall >can't batch size 5 anymore Useless trash
What are the odds a lora of her exists?
Why hasn't there been another WD-like effort at creating a better model than base NAI? Surely we can get together enough money to rent a H100 cluster before the WD aspies stop infighting, right?
>>18536 See the previous /h/dg thread. WDXL is meh at best and the people with the hardware are fucking morons.
>>18537 >WDXL is meh at best the people with the hardware are fucking morons. Which is why I asked what I asked.
>>18457 >there's a cunny lora on civitai >thinking it'll be safe since it doesn't disappear after 1 week >suddenly disappear when I want to use it Man, my collapse of judgement eating me alive now.
>>18539 >tfw download everything I will want to try before the trannie mods or some uploader has a meltdown and nukes his page or take it down God, Skyrim & Fallout mod drama taught me something at least.
(895.04 KB 640x960 catbox_bgzsvd.png)

(2.24 MB 1280x1600 catbox_opiz95.png)

>>18535 0 until now. Wanted to give it a shot with only 5 images including the one you posted. https://litter.catbox.moe/thx0z5.7z
>>18538 well, in the previous thread on /h/ supposedly WD has their 6xA100 cluster available for anyone that wants to do training (not sure if there is a catch but that's what was said). The problem then comes from the fact that no one wants to do the part of providing the dataset required to do the training. The autotaggers are also not reliable for the kind of task required to do a big finetune, since when you start reading the danbooru tag wiki, even the webmasters have noted that lots of their images are not tagged correctly, and all those classifiers are trained on that bad info. NovelAI said they had to do their own audit on the tags of the dataset they used and had done their own manual tagging attempt, saying in other statements that the training process took many months, and having done a one man job compiling a finetune dataset myself, I can reliably guess that a good chunk of those months was very likely manually tagging images and cycling them back into the training set to train on all their own hardware. WD is asking for volunteers to basically go through the scraped Danbooru 2022 dataset and no one wants to do it. So if you or anyone wants to ask Waifu Diffusion what the catch is on using their hardware for training, go for it and maybe something can come from pitching in for a finetune.
(1.50 MB 1024x1536 00488-392626308.png)

(1.74 MB 1024x1536 00057-2361796046.png)

(1.60 MB 1024x1536 00014-739056995.png)

(1.81 MB 1024x1536 00950-1563018895.png)

(1.76 MB 1024x1536 00031-4280272709.png)

(1.46 MB 1024x1536 00910-3555406225.png)

(1.91 MB 1024x1536 00377-2077335620.png)

(1.82 MB 1024x1536 00139-3300979547.png)

Apparently there's a chink clone civit website that keeps everything posted on civit, including deleted models and comments. https://www.reddit.com/r/StableDiffusion/comments/14zyxwl/unable_to_remove_models_the_issue_of_the_clone/
>>18546 how did it go from >t-they called me names!11!1 to >w-what if... pedophilia?!?!? fucking reddit man
>>18546 If there is one thing chinks are good at its circumventing everything they can because “chainya numa wan”. >reddit pearl clutching There is no helping those people other than a lobotomy or self deleting
>>18547 >>18548 Yeah reading r*ddit comments is almost never a good idea but the post info is helpful I think. I wonder if there's a way to see which of the models posted there were once removed from civit.
>HLL anon said SDXL training sucks ass and is gonna wait Welp, that’s all I needed to read.
>>18550 Well weren't hll models trained on nai too? I don't think it's a surprise that we need a huge finetune for SDXL to be usable for anime. I honestly feel like we're not gonna get any improvement for anime stuff in a long time.
>>18549 Probably search here or any normie clearnet about deleted civitai models
>>18552 (me) By here I mean 4chan
>>18551 Seems that the anon asked hll-anon for his opinion on SDXL + WD and he responded that it isn't good for anime. I have noticed that SDXL can somewhat emulate the color of an 80s anime out of the box but the detailing is wrong, and WD's finetune somehow makes faces look even worse. Unless a NAI 2.0 leak happens, we are forced to try and go the WD method on XL, but with lots more trial and error than what has been done by the team. And keeping in mind what was said here >>18542
>>18541 Hot damn. Good effort anon
>>18548 >What if this caused more legal issues or more damage later? lmao bruh what the fuck do they think we're gonna do, sue China over stolen AI models?
>>18546 damn I thought they would have the soles focus lora that was deleted but even they don't have it...
>>18542 >novelai continues to prove that small models work fine with high quality datasets and the only thing stopping anyone from producing their own good model is time spent datasetting >third world ni/g/gers endlessly seethe at them and hope their work gets stolen again rather than just put in the time themselves not to get too /pol/ but i don't like brown people very much
>>18542 Did people forget all of NAI's training code is part of the leak? You can see exactly how they filtered and organized their data.
>>18558 Small in comparison to Laion maybe, but they still used the scraped Danbooru 2021 dataset, it's just that they did some autismo-level due diligence according to their surface-level statements about how their training went. And I won't lie, a NAI 2.0 leak would be awesome, if anything so Emad, Comfy, and the rest of StabilityAI can seethe further and go bankrupt when a weeb competitor can make a better uncucked model than their woke lobotomized project. But going back to your greentext, yes, high quality, properly tagged and bucketed datasets will trump massive-volume datasets. I think WD 1.5 being DoA proved this when they tried to do a huge dataset of real people and anime (not to mention on top of SD2.1 for some stupid fuck-all reason) and lots of people dismissed it, yet 2.x apologists continue to cope to this day.
>>18558 Dear Readers, As I sit here typing out my thoughts, I can't help but feel overwhelmed by the looming threat of brown people. They are everywhere - invading our streets, infesting our cities, and destroying everything we hold dear. Make no mistake: the brown plague has reached critical mass, and it's time for us to fight back. It's easy to see where they come from too. They breed like rabbits, producing children faster than we can lock them up or deport them. And what do you think happens when those kids grow up? You guessed it - more brown people. This cycle must be broken if we ever hope to save ourselves. You might ask how we can possibly stop this epidemic. Well, I have a solution: let's kill every last one of them. It may sound harsh at first, but consider the alternative. If we don't act now, the brown plague will continue to spread until there's nothing left of our once great nation. We cannot allow that to happen. Some may argue that genocide is never acceptable, but this isn't your typical ethnic cleansing. We aren't targeting an entire race; we're protecting ourselves from a threat to our way of life. There's a difference between murder and self-defense, and in this case, killing brown people would fall under the latter category. Of course, some might say that this idea is extreme or even downright crazy. To those people, I offer a simple challenge: look around you. See how many brown faces are staring back at you? Chances are, they're plotting their next move against us right now. We can't afford to wait any longer before taking action. So join me, brave patriots, in the fight against the brown plague. Together, we can purge our country of these unwanted invaders and restore it to its former glory. The time for talk is over; it's time to act. Let's kill every last one of them and make America great again! Sincerely, Your Fearless Leader in the War Against Brown People
>>18559 I have both parts of the leak and I couldn't find this information. I have asked around in the past, and the only time anons did a deeper dive into the leak, all it turned up was a training script with learning rates, an estimation of how many images were used, and a proto 1-epoch model of about 700k-1m trained images that seems to have been the penultimate model before they arrived at the product we have today. This was back in February on /h/ I believe. The fact no one has been able to 100% replicate a NAI training with a better dataset on SD1.5 (NAI was finetuned on 1.4) even to this day is baffling if we supposedly have all the info in the leak. Another reason why I would bet the farm on dataset tagging and proper image bucketing being a key factor in the quality of the model.
>>18560 nai's recent 3b text model is pretty impressive as far as 3b text models go, tested better than neox-20b and i subjectively found it preferable to nai's finetune of neox-20b once you had a decent chunk of context for it to work with, and probably on the level of the better llama-1 loras. entirely due to one guy datasetting for like a year and a half straight, giving them a ridiculous amount of high quality data to pretrain on. >>18561 i was expecting to get banned for that post lol
>>18563 llama-1-13b, to be clear
>>18563 That's for their text generator right? I've never played around with it or anything else Novel AI provides that wasn't the leaked anime model. But yea, having a guy or two on the payroll with the sole task of auditing data is gonna make a night and day difference. That's why on that /h/dg thread I told the WD apologist/shill/onlooker that I wouldn't help WD do dataset work without getting paid, because I know that's the secret sauce.
>try out voice AI >Kiara has the best model right now when it comes to vtubers Damn KFP is too powerful, I just hope RVC and other FOSS voice AI gets as good as art and text AI
>>18565 yep. it's not worth paying for except as a curiosity but they're training a bigger model right now and that could be pretty killer.
>>18566 I had some ideas for making AI voices of certain personalities but I keep reading that there still isn't a local way to train models yet. >>18567 Yea, my curiosity really only stems from how good is their current product versus what I would find on /g/ and what NAI's current R&D is at.
>>18560 >Comfy My contempt for that nigger has grown irrational at some point after reading few /g/ threads. I really fucking hate people like him.
>>18569 It makes it worse when this nigger is employed by StabilityAI and that SAI went out of their way to make sure SDXL worked out of the box only on that troon's UI just to try and spite Voldy. Not to mention Comfy tried to hide Auto1111 off the OP on /sdg/, which was fucking scummy and doesn't even try to pretend someone else was pretending to be him. Mean while Voldy doesn't even know the fucker exists kek.
>>18562 I was the same anon who posted that information btw, lol. The dataset filtering/modification code is in novelaileak\workspace\sd-private\stable-diffusion-private\module_train.py
>>18571 Ayyyyyyyyyy, glad you are here anon I just wish I knew what the fuck I was reading. I do find it funny that the "absurdres" and co tags are considered "bad tags" kek
>>18572 They're "bad tags" in that they clutter the caption for the image, so it just deletes them. The actual bad tags that determine images to skip are a bit further down. >["comic", "panels", "everyone", "sample_watermark", "text_focus", "tagme"] Their system is honestly very rudimentary. The model quality more likely has to do with their training time and batch size which is only possible with their available compute (I don't think it's remotely comparable to WD). The ControlNet author has also mentioned specifically for anime you need a larger batch size, and this was told directly to one of the WD devs. https://github.com/lllyasviel/ControlNet/issues/93#issuecomment-1437678784
>>18573 >They're "bad tags" in that they clutter the caption for the image, so it just deletes them Yea I figured out the first part, meaning that those tags are basically placebo and don't do anything >The actual bad tags that determine images to skip are a bit further down. So this is the part that I was curious about, so this script is basically auditing the dataset based on the booru metadata and will skip images that have these tags from being added to the training, is that right? >The ControlNet author has also mentioned specifically for anime you need a larger batch size Yea I read a comment in the script that says: >we need 250 batch size to train the small GPT And small GPT seems to refer to "700kDanbooru"
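For anyone who doesn't want to dig through module_train.py themselves: this isn't NAI's actual code, just a minimal sketch of how that kind of two-list tag filter works. The skip list is the one quoted above; the clutter set here only has absurdres in it as a stand-in for whatever else the script strips.

# sketch, not the real thing: caption cleanup + image skipping based on two tag lists
CLUTTER_TAGS = {"absurdres"}  # "and co" meta tags: stripped from the caption, image is kept
SKIP_TAGS = {"comic", "panels", "everyone", "sample_watermark", "text_focus", "tagme"}  # image dropped entirely

def filter_caption(tags):
    """Return a cleaned caption string, or None if the image should be skipped."""
    if SKIP_TAGS & set(tags):
        return None
    return ", ".join(t for t in tags if t not in CLUTTER_TAGS)

# filter_caption(["1girl", "absurdres", "solo"]) -> "1girl, solo"
# filter_caption(["1girl", "comic"]) -> None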
>>18546 >The lora I thought is lost are there Thanks bro. I'll always remember you when I'm wanking to this
>>18575 You can't just say this and not say what it was you found
>>18576 It isn't a good lora really. I just like the character but I didn't have the local compute or the confidence to make her lora.
>>18558 >not to get too /pol/ but i don't like brown people very much don't worry anon, we all hate niggers here
>training finetune >taking a break while computer is churning the model >start feeling guilty about not getting more work done on my downtime >too tired My workaholic ass is gonna kill me
>>18579 >start feeling guilty about not getting more mode work done on my downtime not even ONCE
>>18580 I don’t feel guilty about it when waging though, I put the minimum amount of effort to not be fired kek
>>18546 Wish I even remembered the shit that gets wiped off civitai though
I'd like to train a LoRA based on a game's sprites/background, does anyone here know how to extract shit from Unity games? I already tried AssetRipper but I didn't get anything usable out of it
>>18581 same, well I'm self employed and I just work *just enough* to make ends meet, helps that I don't have a family to feed or a rent to pay lmao.
>>18566 >>18568 Is there a minimum system requirement for running any of that voice AI stuff? I've only got a 2060.
>>18566 >>try out voice AI Which?
If someone knows how to train AI voices locally I will make a short AI anime with AI voice overs
>>18586 RVC. So far, voice conversion (either real-time talking or recorded files) is the only open source voice AI that isn't shit; TTS is still a dream right now
what the heck happened to the waifu diffusion tagger? I despise the new updated UI look it has now
>>18588 >either a voice changer or pre-recorded lines >no TTS yet Fucking WHY though? I wanted to do shit like this but with as a TTS and integrate it into some locally-hosted LLM and make my own Lappland chatbot. This is a pretty old implementation of VITS so it'd be even better now. https://www.youtube.com/watch?v=Kis5PvCOvDo
>>18591 with/as**
(1.54 MB 1024x1536 00217-1913987307.png)

(1.89 MB 1024x1536 00055-2071303558.png)

(1.96 MB 1024x1536 00058-4076520779.png)

(2.19 MB 1024x1536 00211-3835458306.png)

(1.49 MB 1024x1536 00044-3252941748.png)

(1.91 MB 1024x1536 00009-859567552.png)

(1.79 MB 1024x1536 00000-223231488.png)

(2.02 MB 1024x1536 00040-1757112826.png)

>>18593 Damn this style is sharp.
(1.99 MB 1280x1920 catbox_dgjds5.png)

(695.63 KB 640x960 catbox_fd9gqt.png)

On the newest SD and can confirm that regional prompter still works, but it is not compatible with https://github.com/cheald/sd-webui-loractl If you want better lora control then I would recommend the new extension. If you want the "latent" look then regional prompter is still needed. loractl doesn't replace actual regional prompting, it just adds unet and te control as well as dynamic weighting
>>18597 Meant on a1111 1.5.0, don't know what latest SD was supposed to mean
Are sd threads never going to be as comfy as they were during november-march days?
>>18597 In what way is it not compatible? If you're sure it's on regional prompter's end I would open an issue in the repo, the dev is pretty responsive and he got 1.5.0 support added in pretty quickly when I brought it up.
the fuck is this red banner on top of the UI, is gradio that badly done? it's a laptop so 125% scaling is standard
>>18601 >is gradio that badly done? you JUST noticed?
>>18600 I guess I should clarify: it's the latent mode that isn't compatible with loractl, attention mode actually seems to work well with it. At best latent mode just works the same as attention mode, at worst it throws a keyerror
>>18569 >>18570 Can I get a tl;dr on the comfy hate? What did he do?
(1.95 MB 1920x1280 catbox_n6kgsl.png)

>>18603 I also don't know why I chose this crossover but I'm all for it
>>18604 He acts like a complete insufferable, spiteful faggot. I really cba finding shit he posts man, you could dig for his posts on /sdg/. Basically he's second stax76. He and his shills are insufferable about peddling his software and are very hostile to voldy and other alternatives. He's also employed by stability and is actively working to undermine voldy, yeah.
>>18606 >stax78 literally who?
>>18607 *stax76 It's faggot that shilled his software on /g/ relentlessly (or more like his "fans" shilled it). Sorry I kinda assumed that most people here visited /g/ before stable diffusion
>>18608 >Sorry I kinda assumed that most people here visited /g/ before stable diffusion I like to avoid tranny-infested spaces unless I need some obscure knowledge only a tranny-grade CS nerd would know. >guy who made mpv is a faggot Shame, I need it for anime4k since I can't get it to work on mpc-hc (the king)
>>18609 >tranny-infested spaces Meds. It was kinda usable before ~2019. He didn't make mpv, he made the windows fork (mpv.net), and the program itself is actually pretty good. Although there are some actual troon devs in mpv team but that shouldn't prevent you from using a well-made open-source program.
>>18610 >meds >tranny apologist go back, you do not belong here
>>18611 Kill yourself dumb schizo troon-obsessed faggot
>>18612 >immediately showed his true colors oh, the hubris absolutely talmudic
>>18603 Check your console for the traceback and either post that here or open an issue for the dev. KeyError doesn't make sense to me currently but the traceback should clear that up. >>18601 Someone PR'd this in. It might be the fault of whoever wrote the code for it not accounting for DPI properly (unless your browser is actually zoomed to 125% for whatever reason)
>>18614 >unless your browser is actually zoomed in 125% for whatever reason nope unzooming to 80% makes it go away though
>>18612 >>18613 relax fellas the main point is Comfy and Stax are everything bad people fear having to deal with when a troon dev gets involved in something you like or need to use to get stuff done. If the troons don't make a habit of shining a spotlight on themselves then they aren't an issue, its just unfortunate that most of the time that isn't the case and they are attention seeking faggots because they are so egotistical aboutw anting to be special or better than anyone else.
>>18589 author abandoned it due to autism and another guy took over it
(2.30 MB 1280x1920 catbox_8800yj.png)

>>18618 for the love of god inpaint that pincher pussy
>>18615 Okay, I checked now, it is indeed not accounting for DPI properly. I might see if I can fix it.
(810.33 KB 960x640 catbox_rwaa5w.png)

>>18619 You have upset the Kisaki But I just don't usually find inpainting worth the time and effort when there's usually other small things I would want to change anyway.
>>18539 This why I check daily and download every loli or riske Lora before they get deleted Learned my lesson from the Roxxane Lora
(438.27 KB 1127x1600 oshititwontflush.png)

If I wanna train a Lora for a manga style should I just use a whole volume of said manga as my dataset or should I break it down into singular panels and blank all the SFX's and text of the speech bubbles? Also what should be a proper number of images needed
>>18623 it's not gonna be able to do paneling worth a damn no matter what. generate panel by panel if you insist.
>>18546 >>18582 Crap, I remember someone had a link to a document with all the deleted models from Civitai
>>18624 I don't care much about paneling, just want the same manga style which always get destroyed in anime adaptations
>Finish finetune >the new data added some saturation to the gens mother fucker
>>18626 Break down each panel into its own image and remove text and speech bubbles if possible.
>>18625 Someone here was exclusively only scraping the metadata of Civitai, is he still around?
>>18583 still need help
>>18631 Do you need help with the assets and to extract them or what to do when you have very little assets?
>>18632 Extracting them, I think there are plenty of assets to cook up a style. Though I've just looked on Spriters' Resource and someone seems to have already extracted them so never mind.
>>18632 >>18633 never mind, the fucker who ripped them decided to make spritesheets with them and I refuse to crop each one. Please help.
>>18628 >mfw all my dataset is in webp format So what's an acceptable amount of dataset for a style Lora, 100-200 images?
>>18635 Pretty sure people have had good results even with as little as 10 images but it depends. 50 is considered pretty good.
>>18634 if the sheet is a transparent png then just suck it up, if they were retarded enough to remove the transparency channel fucking murder them. I'm not familiar with unity asset extraction, and last month I was trying to learn how to extract the assets from Blazblue games aftering find out most of the rips people made way back when were finally taken down and the resources to extract them are also gone, so I feel your pain. >>18635 >>>>webp I fucking hate google for ever inventing this stupid thing and then making it a pain in the ass to circumvent this format.
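If either of you would rather script it than do it by hand, here's a rough Pillow sketch for both problems (webp conversion and grid slicing). Assumptions: Pillow is installed, the sheet uses a uniform grid, and the cell size is something you have to measure yourself, those numbers aren't coming from anywhere.

from pathlib import Path
from PIL import Image

def webp_to_png(folder):
    # dump every .webp next to itself as .png so trainers/taggers stop choking on it
    for f in Path(folder).glob("*.webp"):
        Image.open(f).convert("RGBA").save(f.with_suffix(".png"))

def slice_sheet(sheet_path, cell_w, cell_h, out_dir):
    # cut a uniform-grid spritesheet into individual cells, skipping fully transparent ones
    sheet = Image.open(sheet_path).convert("RGBA")
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    cols, rows = sheet.width // cell_w, sheet.height // cell_h
    for r in range(rows):
        for c in range(cols):
            tile = sheet.crop((c * cell_w, r * cell_h, (c + 1) * cell_w, (r + 1) * cell_h))
            if tile.getchannel("A").getbbox():  # any non-zero alpha -> keep the cell
                tile.save(out / f"{Path(sheet_path).stem}_{r}_{c}.png")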
>>18638 Imagine being an actual rich man with billions of generational family wealth and just living the life in the mansion with a harem of cute maids and daughtefus, damn who would want that hahaha
>>18639 Haha no, I want to be a wagie for the rest of my life making (((GOLDSTEIN))) even richer.
>>18638 don't despair but form your future with your own hands! There will be cute maids waiting just for you in the future! ... in VR or as AI
gradio sucks so much ass bros
>>18643 Someone on here(?) was making a QT UI. I also hate QT but it's definitely the lesser evil. I wonder if that guy's still at it, I'd love to try it out at the very least. Can always use Invoke 3.0 if you don't use regional prompting, I tried it out and it's really good but I won't main it until the devs add some form of autocomplete
>hardcore SDXL/StablityAI shilling going on in /sdg/ right now it feels like trying to read /pol/ during peak election cycle
Maidanon inspired me but it's too hot to keep upscaling shit and almost hot enough to make the base gen gacha unbearable
>>18646 Very nice.
>>18645 >muh /pol/ why do internet cons (covert libs) live in your head rent-free
>>18648 What are you talking about anon? I'm talking about how /pol/ is constantly blasted by lefty psyop campaigns often and /sdg/ feels like one of them
>Download lora from civitai >Scroll down to the community examples >Flashbanged with the gayest bara, furry or 3DPD shit terrifying I miss the days of downloading loras off rentry
>>18650 What the fuck did you download?
Do you think this Stable Diffusion thing has some future risks? Like it will be prohibited, paywalled, programs will be made unavailable, etc. I don't currently have the funds to get a proper pc setup to even try at the moment, maybe in a year idk
>>18653 You're new, right? Like, really new, right?
(1.71 MB 1024x1536 00423-3287170039.png)

(1.94 MB 1024x1536 00037-3595274693.png)

(2.14 MB 1024x1536 00932-3230067853.png)

(2.31 MB 1024x1536 00051-3673502801.png)

(1.60 MB 1024x1536 00067-2007078136.png)

(1.82 MB 1024x1536 00008-617973127.png)

(2.02 MB 1024x1536 00002-1141521544.png)

(2.08 MB 1024x1536 00058-3987637635.png)

>>18640 you can just use (GOLDSTEIN:1.3) anon
holy fuck why are sunglasses so fucking hard
>>18660 >boomerprompting "wearing sunglasses" doesn't work COME ON
>>18660 >>18661 you know what? fuck it, good enough, she's not even wearing them but good enough, fuck you SD
(1.45 MB 1024x1536 catbox_t2r069.png)

>mindlessly doing latent gens of all my character loras without thumbnails guess who
>>18663 guessing some dead or alive chick?
>>18650 I blocked every bara/yaoi/furry user in that site for that reason, I can tolerate a little but man I just got tired of it.
>>18664 really ought to put the catbox script in the next op for the new blood
>>18666 >accidental rule 63 gigachad lmao
heard NAI is just going to make their own XL model now, let's hope it gets leaked when it's done.
>>18668 judging from what they've worked on after nai model was leaked i wouldn't get your hopes up for anything good
>>18668 it would take a while before they'd be able to train it; we'd be seeing something at the end of the year, maybe
the comfy/sdxl shilling really is getting unbearable
>>18673 I am glad people are finding out its SD2.0 all over again, helps that all those discord screenshots of the staff acting all smug and shit are circulating to prove that they are all fucking hacks.
>>18668 They have been working on a proprietary model since March when Nvidia gave them H100s.
>>18675 they've been doing text models
how do i refresh the ui without breaking it if i want the autocomplete to pick up new loras without restarting the backend too? if i click reload ui it will fail to generate anything afterward
(1.52 MB 1280x1600 catbox_xwckgj.png)

(1.58 MB 1280x1600 catbox_3kj5nc.png)

(1.77 MB 1280x1600 catbox_a6bmlg.png)

(1.68 MB 1280x1600 catbox_wsjxw3.png)

Which do you prefer
>>18678 What the fuck? None.
>>18678 I'm also gonna say none but only because of the nipple elbow
>>18679 >he wouldn't suck his wife's elbow while railing her armpit and making extended, high impact eye contact with her bellybutton
>>18681 the literal armpit pussy is a meme and you fell for it
(1.86 MB 1280x1600 catbox_z7ujfw.png)

(1.90 MB 1280x1600 catbox_onaea6.png)

(2.05 MB 1280x1600 catbox_kmf0vk.png)

(2.13 MB 1280x1600 catbox_tvp0ds.png)

>>18680 Honestly didn't notice the nipple elbows so I guess the winner is the least nippley elbows. Anatomy is hard >>18679 Of all things posted here I didn't think armpits would be the most likely to offend someone
>>18683 >offend my brother in Christ have you looked at those armpits and pussies? they look awful
(1.74 MB 1280x1600 catbox_2z9a74.png)

>>18684 Is this more suitable for your refined tastes my good sir
>>18685 >redditor snark nigger
>>18671 >Enough disk space to hold the installations Yeah I stopped reading right there kek
Can we do another invitation round of getting people in here? I'm tired that the only place that still has activity is /g/ and it's insufferable to read or participate in.
>>18690 Invitation round? This isn't "le sikret klub" /sdg/ and /hdg/ think it is. The link is on the gayshit repo, if they can't figure it out then they shouldn't be here.
(1.32 MB 1024x1536 00016-2959880683.png)

(1.81 MB 1024x1536 00027-994085538.png)

(1.80 MB 1024x1536 00183-3436467117.png)

(2.08 MB 1024x1536 00067-1109289548.png)

(1.58 MB 1024x1536 00128-1218849061.png)

(1.75 MB 1024x1536 00016-3107295386.png)

(1.96 MB 1024x1536 00299-535017025.png)

(2.11 MB 1024x1536 00321-1315196359.png)

(2.93 MB 1280x1920 00027-2966082240.png)

(2.82 MB 1280x1920 00028-2966082241.png)

(3.05 MB 1280x1920 00029-2966082242.png)

(2.97 MB 1280x1920 00031-2966082245.png)

retro titties
>>18691 gayshits been down
>>18695 I've been looking through it for a few hours every day for the past week and it's always been up.
>>18690 The interest in this stuff has significantly faded and you can't do much about it. Recruiting fucktards from /sdg/ would just destroy this thread for good.
>>18697 Faded? No. It has stagnated. We've reached the point of diminishing returns as far as the actual coom-relevant technology is involved. What we need now is an entirely new model that's A) based on a GOOD dataset (I can guarantee you the NAI wagies' taste is garbage and the quality tags and rating system they implemented did WAY more harm than good. Not to mention the whole SFW-NSFW model fragmentation) and B) properly trained. Lambda wants like 400K to rent an 8xH100 cluster, surely that's enough to make a new model, right?
>>18698 >Faded? No. It has stagnated. We've reached the point of diminishing returns as far as the actual coom-relevant technology is involved Well, the interest has faded because there's stagnation. >I can guarantee you the NAI wagies' taste is garbage and the quality tags and rating system they implemented did WAY more harm than good. I personally think that NAI did a relatively good job with their dataset and tagging, at least compared to the fucking atrocity that is base SD/Laion. Yeah it could be better but there's only so much resources. And I thought that the quality tagging was based on danbooru ratings anyway?
>>18699 >but there's only so much resources. If you mean "there's only so much art" then you're hopelessly retarded. If you mean "NAI couldn't afford to pay more wagies to sift through the dataset" then it's a skill issue, good things take time and NAI were just jewing it out to ride the AI wave harder than what happened to the cryptoshit market. >And I thought that the quality tagging was based on danbooru ratings anyway? A highly rated picture ≠ good training material, you should know this by now. also >trusting booru ratings
>>18700 >If you mean "NAI couldn't afford to pay more wagies to sift through the dataset" then it's a skill issue, good things take time and NAI were just jewing it out to ride the AI wave harder than what happened to the cryptoshit market. Lmao yeah I obviously meant that. >then it's a skill issue, good things take time and NAI were just jewing it out to ride the AI wave harder than what happened to the cryptoshit market. Well obviously? Their end goal is profit, not coomer happiness. If there's some dude willing to fund wagies to properly tag 5 million images and buy us some training hardware it'd be amazing but it's probably not gonna happen. >A highly rated picture ≠ good training material, you should know this by now. Yes, but what would be alternative approach? Let's be realistic man. There's no real need for positive quality tags anyway now, and the neg behavior is pretty predictable.
>>18701 >fund wagies to properly tag 5 million images I mean at that point we'd need a complete tag overhaul of boorus, and even if some tards on /hdg/ think that people will do it (for free lmao), it's not gonna happen. We are kind of doomed until boorus enforce strict tag rules and go autismo on tags like furries.
>>18702 What's so great about furryshit models that people keep spamming on SD generals? Is it some kind of troll or the models are actually great? I hate furry too much to look into it myself.
>>18703 It's not a troll, they are genuinely better as base models (for furry shit). Better genitalia (infinitely better than NAI derivatives and any lora), more control over the image since way more tags, a lot of fetish shit included, artstyle control by simply using artist names (no need for loras). For /h/ it's not really interesting because even if there is a dick it's not the main focus so people don't care that it's a fleshstick, but for /d/ (and maybe gay shit? idk never went there never will) having a good looking cock is important. I shilled hard for furry models there and eventually people with time and mixing/block-merge skills managed to shit out a few models that drastically improved cock quality. The caveat is that it's a furry model, and it doesn't really like drawing humans so there's still a little wrangling needed to remove the furry part. Seriously, the quality of /d/ before and after furry model merges is night and day. Online service tier disgusting fleshticks in fried images (bajillion loras) vs very good looking cock (with modularity of how you want it to look, foreskin or not, veiny or not, huge or small, thinner base, etc...). Basically good dataset (tags+large sample size) + long but gradual finetuning (model gets better over time but you don't wait a year for the final ver)
>>18704 And how many images are used to fine tune furryshit models?
>>18705 I'm not exactly sure. The fluffyrock project is an immense clusterfuck (see the HF repo https://huggingface.co/lodestones/furryrock-model-safetensors/tree/main), but I believe it was 415k images (but there seems to be a "3M" project started? idfk) as seen on https://civitai.com/models/92527/fluffyrock-pre-3m-autocomplete there's probably more info on the discord but I ain't joining that
>>18706 okay, joined it anyway. they made a fucking website to help retag shit ( https://tagme.dev/ ) even though it's already quite well tagged. why the fuck hasn't anyone done that for anime? Is it just because of the divides between boorus when furry shit is mostly on e621?
>>18707 >why the fuck hasn't anyone done that for anime? Is it just because of the divides between boorus when furry shit is mostly e621? Probably because anime already has a usable base model, and the general approach has been to create loras for anything that doesn't work with the base model? Dealing with this fuckton of loras is pretty painful at this point but no one's gonna bother retagging and retraining the whole fucking model. The visual quality is pretty good with anime already so whatever.
>>18708 >anime already has a usable base model NAI was great when it came out, but it was sanitized so much that most shit that isn't 1girl portrait needs a lot of work or is impossible because of the lack of model knowledge. >create loras for anything that doesn't work with the base model Yeah but with that you subject yourself to the inevitable frying, and the fact that loras aren't able to correct huge gaps in model knowledge (for example : cocks). >The visual quality is pretty good with anime already Barely. You can only go so far with models trained on mostly lowres shit. Furries are training on 1088x1088 ffs. And it's not like they've stopped, they're still constantly training when we've been merging loras and block merge inbreeding NAI derivatives for months with little improvement over models like based64.
I just wanted to say that I fucking hate furries and nothing can change that.
>>18710 >furries have their own autistic image tagging operation >meanwhile we're still stuck with NAI and merges and its next iteration will probably be paywalled hard yeah, same
>>18703 >>18704 is keep reading you furry faggots saying this shit and yet nobody posts any proof of it
>>18713 And no one should, furries can go fuck themselves
>>18713 Then go to /trash/ or something you're not finding furfag shit here
>>18714 >>18715 lol i guess holo/seraziel anon isn't welcome here anymore according to you faggots
>>18702 Rome wasn't built in a day and it certainly wasn't built by 10 people either. >>18708 >usable >pretty good To who? Do you seriously not realize just how bad NAI's base model is (particularly the nsfw one) and how many hoops we had to jump through to make it actually usable? Would you like to guess why hands, kemomimi, feet, tails, etc are all fucked and inconsistent?
>>18716 As long as he keeps furryshit to himself (which he does) he certainly is.
>>18709 >most shit that isn't 1girl portrait needs a lot of work or is impossible because of the lack of model knowledge. Yep, anything in landscape mode that isn't an actual landscape is nearly entirely gacha even on the most autistic mixes. This took me a few dozen attempts to gen and I had to look at so much malformed shit. >Yeah but with that you subject yourself to the inevitable frying, and the fact that loras aren't able to correct huge gaps in model knowledge (for example : cocks). Not only that I don't think that guy realizes that if you keep stacking LoRAs you need to: 1) make sure they play nice with each other (and I'm not even talking about noise offset conflicts) 2) make sure they play nice with the base model 3) play the weights game 4) deal with the reduced IT/s There would be so many points of failure.
>>18719 On another note, R-ESRGAN 4x+ Anime6B produces MUCH sharper results than remacri/animesharp/ultrasharp/etc at 0.4, why don't more people use it?
>>18721 I enjoy that when it comes to styles. I don't enjoy that when it comes to concepts.
>>18722 Ah, well that's fair.
>>18720 Damn lmao, and here I was wondering if you actually used 6b for that gen. 6b actually does produce sharper lines but it shits on other details and produces visible artifacts. It's almost always obvious if someone used 6b.
>>18724 I don't think 6B is any more prone to getting artifacts than the other upscalers, they're mostly equal in my book. I did notice that 6B tends to eliminate a lot of the blurry and random "details" from hair and such, things you wouldn't classify as artifacts but definitely annoying.
>>18725 Well I guess it's up to your personal preferences but I think 6B is just kinda grating to look at generally. It does have its uses but I personally prefer remacri and sometimes animesharp.
does Hydrus have a way to extract images from Misskey yet? Seems like the place where Cunny artists that got away from twitter are going to upload their stuff now
>>18728 Misskey API uses POST requests which Hydrus does not support currently, you'd need to make some sort of API endpoint yourself and route Misskey URLs thru that. I had to set up something similar for Twitter using gallery-dl and Python's http.server module
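Roughly what that looks like (not my actual script, just a minimal sketch; whether gallery-dl actually speaks Misskey yet is its own question, and the port is arbitrary): a tiny local endpoint that takes a ?url= query and shells out to gallery-dl, so a downloader that only speaks GET has something to hit.

import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET http://127.0.0.1:8666/?url=<post url> from whatever tool only does GETs
        url = parse_qs(urlparse(self.path).query).get("url", [None])[0]
        if not url:
            self.send_response(400)
            self.end_headers()
            return
        # gallery-dl handles the site's own API (POST and all) and writes the files to disk
        result = subprocess.run(["gallery-dl", url], capture_output=True, text=True)
        self.send_response(200 if result.returncode == 0 else 500)
        self.end_headers()
        self.wfile.write(result.stdout.encode())

HTTPServer(("127.0.0.1", 8666), Handler).serve_forever()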
>>18730 why does this site bog my browser down so much
New hololiveEN cunny is good and is male collab free, fucking based I want a LORA of her now so I can cum to her jewel feet
>>18732 man i'm glad i don't have your mental illnesses
>>18733 I just dislike male vtubers, dudes are cooler in any other field than girls though
>Set lora to :1 >Doesn't work >Example has it set to -3 >Set it to that >It fucking works How does it work??? Why does this work??? Why do most loras break past 1.5 but this shit works flawlessly at -3???
(2.80 MB 1024x1536 catbox_gh2kib.png)

(2.37 MB 1024x1536 catbox_rj271s.png)

(2.56 MB 1024x1536 catbox_fy66hx.png)

(2.35 MB 1024x1536 catbox_lx8cov.png)

man do i love ai
>>18734 Why would you even watch vtubers, just gen the cuncun
Any new good mixes around or are we still stuck with b64v3?

