/hdg/ - Stable Diffusion

Anime girls generated with AI


/hdg/ #17 Anonymous 08/16/2023 (Wed) 22:03:41 No. 19952
Previous thread >>18739
was about to hit post >>19951 I hope so, the 120 pages of fat gura won't host themselves
I have migrated
(1.35 MB 3072x1720 grid.jpg)

>>19954 The extra noise parameter I added is too kino. Those extra streaks of water on her body...
>>19955 Damn, it does look pretty nice, and doesn't seem to fuck up anything.
>>19957 you're getting there champ
https://www.pixiv.net/en/users/54698934 Anyone have a LORA of this artist or did anyone ever make one?
one more time, I promise, just testing the catbox script for something stupid https://files.catbox.moe/uq4g2v.png
>>19962 alright, didn't seem to work, but if you wanna see what I was doing, drag the image into PNG info
>>19959 Not mine but here's a higegepon lora https://files.catbox.moe/f4f709.safetensors
>>19964 Based, going to try it out later.
(67.71 KB 466x571 msedge_Fecmfp1rWm.png)

THE ABSOLUTE STATE OF LTT SHILLS LMFAO
>>19966 Unironic coomsoomer brainrot
>>19967 >n-noooooo you're harassing a poor millionaire!11!!
>>19954 >migrated pls explain I'm out of the loop
the amount of seethe that sd still is able to milk from normalfags is astounding man. even on freecodecamp video there's faggots screeching about muh starving artists and stealing. I wonder how this whole shit goes long term
did you invite /g/ in here or why has this place become so cancerous also >anonfiles talk last thread that site was hosting tons of CP since TORfags were using it
>>19972 When you get tired of watching sd/g/ you kind of want to check out what else is going on. And yea, anonfiles was being used to host questionable shît. Even the forbidden model was on there at one point.
>>19972 This thread is an offshoot of an /h/ thread which was an offshoot of a /g/ thread, I think most of the posters here originate from /g/.
>>19972 Some cunt probably linked it on /g/ or some faggot discord. >>19974 People that currently reside in /sdg/ general are mostly completely braindead niggers and pajeets, it wasn't nearly as bad few months ago.
>>19955 So what is it exactly? I read the PR and it seems like CN tile.
Why do people keep training anime on SDXL
>>19977 civitai contest
>>19977 >>19978 pretty much this, just sunk cost and sour grapes
>>19976 Well, I made the PR and that's not really what it is. That being said you actually could use this with CN tile if you want to, it might actually improve it. https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#extra-noise I also made a PR and script for masking the area it adds extra noise to (for hires fix). Later I was going to see if I could hack it together with DAAM or however Regional Prompter calculates the attention area so you could just prompt which area you want to be masked, but I honestly still prefer doing it manually. https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12616
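(For anyone skimming: a minimal sketch of the general idea behind the extra noise option, under the assumption that it boils down to mixing a little additional noise into the already-noised latent, optionally inside a mask. The function and argument names here are made up for illustration, this is not the code from the PRs.)
```
from typing import Optional
import torch

def add_extra_noise(noised_latent: torch.Tensor, extra_noise: float,
                    mask: Optional[torch.Tensor] = None) -> torch.Tensor:
    # noised_latent: latent after the normal img2img/hires-fix noising step
    # extra_noise:   strength multiplier, 0 disables the effect
    # mask:          optional tensor in [0, 1], broadcastable to the latent; 1 = add noise here
    if extra_noise <= 0:
        return noised_latent
    noise = torch.randn_like(noised_latent) * extra_noise
    if mask is not None:
        noise = noise * mask  # restrict the extra detail boost to the masked region
    return noised_latent + noise
```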
>>19980 Doing gods work anon!
>>19980 did you have any prior ML/torch knowledge before sd? can someone who knows somewhat basic flask python learn and contribute in a reasonable time frame?
>>19982 I had none other than screwing with ML projects for years just for fun (GPT2 days). Also I frankly still only have surface level knowledge of SD. If you look at the PR that implemented the extra noise it's actually incredibly simple. Anything outside the pytorch stuff shouldn't be too difficult to pick up, depends what you'd want to contribute I guess.
>>19983 I see, thanks. >depends what you'd want to contribute I guess Dunno. I kinda have too much free time currently and most likely will for a few months, so I just thought it'd be cool to do something useful with this.
I am going to try looking into the NovelAI VAE issue more, because it seems like it only happens in hires fix, not img2img, which logically doesn't make sense because they're the same operations. I first want to make sure it's consistent across systems though. Can someone confirm they can generate this image fine, but using hires fix causes the NaN issue, even with the VAE in full precision (--no-half-vae)? The only hires fix setting that seems to matter is upscale factor, it starts failing at 1.15 for me. --no-half actually fixes it entirely but the whole model shouldn't need to be using full precision, might be a clue to the issue though. https://files.catbox.moe/74wkie.png
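(If you want to check this outside the webui, a rough sketch: decode the saved latent with the VAE in fp16 and then fp32 and see which one produces NaNs. `vae` is assumed to be any loaded autoencoder with a .decode() method, e.g. a diffusers AutoencoderKL, and `latent` the tensor from the failing gen; both names are placeholders.)
```
import torch

def decode_has_nan(vae, latent, dtype, device="cuda"):
    # Cast VAE and latent to the same dtype, decode, and report whether the
    # output contains any NaNs.
    vae = vae.to(device=device, dtype=dtype)
    with torch.no_grad():
        out = vae.decode(latent.to(device=device, dtype=dtype))
    img = out.sample if hasattr(out, "sample") else out  # diffusers wraps the tensor
    return bool(torch.isnan(img).any())

# for dtype in (torch.float16, torch.float32):
#     print(dtype, "produced NaNs:", decode_has_nan(vae, latent, dtype))
```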
>>19985 Would be easier to help check if you catboxed one that doesn't use two loras.
>>19986 If you can make an example without LoRAs then please do, that would help. Here are the links in case you need them: zankuro https://litter.catbox.moe/g9rgm1.safetensors ponsuke https://litter.catbox.moe/h2b8wj.safetensors
>>19985 >>19987 Also I think I already am making some good progress, it seems like it's something going wrong with the image that gets fed in for conditioning in img2img, not the latent
Is there no latent upscaler apart from basic shit? How difficult would it be to train one
(1.94 MB 1024x1024 catbox_x8j7oj.png)

>>19985 Works fine for me. >>19990 >but I can't find the setting that enables the inclusion of VAE hashes in the metadata. That one is still in the dev branch iirc.
>>19991 Forgot to mention I use --no-half-vae.
>>19990 >>19991 Well shit. Yeah, that's the correct one. I just used the metadata from those upscales as a sanity check too and it still fails for me. What's more weird now is I don't think it's the latent or the image conditioning necessarily. It's consistently failing within the decoder upscaling, specifically here: https://github.com/Stability-AI/stablediffusion/blob/cf1d67a6fd5ea1aa600c4df58e5b47da45f6bdbf/ldm/modules/diffusionmodules/model.py#L634-L641 I had more to write here but I might actually be onto something now, hopefully I have a better idea of what's happening soon
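(A generic way to narrow this kind of thing down, in case anyone else wants to poke at it: hang forward hooks on the decoder and report which submodule first emits non-finite values. `vae.decoder` below is a placeholder for however you're holding the model; this is just a debugging aid, not a fix.)
```
import torch

def install_nan_hooks(model):
    # Attach a forward hook to every submodule; any module whose output contains
    # NaN/Inf gets printed, and the earliest print shows where fp16 first blows up.
    handles = []
    def make_hook(name):
        def hook(module, inputs, output):
            out = output[0] if isinstance(output, (tuple, list)) else output
            if torch.is_tensor(out) and not torch.isfinite(out).all():
                print(f"non-finite output in: {name} ({module.__class__.__name__})")
        return hook
    for name, module in model.named_modules():
        handles.append(module.register_forward_hook(make_hook(name)))
    return handles  # call .remove() on each handle when done

# handles = install_nan_hooks(vae.decoder)
```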
>>19925 you living the dream, anon. just make sure there are no reflective surfaces behind you on those conference calls.
>>19993 Can you post/verify the vae sha1 hash? I got the one from the old torrent like >>19990, same hash too.
>>19995 Well shit, you did verify, I'm blind.
>>19985 Will verify later but I get NaNs even on normal gens, is that fixable without forcing me to use nohalf? Cause that one doesn't work either
>>19990 >>19991 I think I narrowed down the issue -- if one of you is willing, could you check out the dev branch and see if it works or not? If not, how you could help instead is find some prompt/setup on 1.5.1 that produces NaNs. Maybe it was somehow completely resolved as of 1.5.1 though and the upcoming 1.6.0 reintroduced it. Specifically after doing a git bisect this is the commit that reintroduces the issue for me: https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/cc53db6652b11e6f7bca42c3aa93bd6761ed3d3f
>>19998 Catbox one of your gens and I can check. Ideally it should have no reliance on 3rd party extensions (built-in are fine). Will need commit hash too.
>>20000 Will send one later. I'm on an old March commit but stopped using the NAI VAE in January cause the NaN errors were frequent enough to bother me even with nohalf and even when generating without hires/img2img
I'm on 22bcc7be still, anything I can do to help?
>>20002 Make some prompt setup using only built-in features that causes NaNs with the NovelAI VAE even with --no-half-vae. Using --no-half fixes it entirely in my tests but of course that's stupid to make that necessary.
(45.72 KB 905x746 2023-08-17_13-48-06.png)

>>20002 >>20004 Actually, if you can do it even without --no-half-vae, that would be even better, but you need to confirm then if it happens in img2img. You can make img2img behave identical to txt2img if you set it up like picrel. If you're on 1.5.0 or above you should disable the "Automatically revert VAE to 32-bit floats" setting too, otherwise it forces the webui to use full precision until it's restarted.
>>20005 >>20004 I do not have the automatic vae setting and never have --no-half-vae on. So make a gen until it gets the NaN error, then try it again in img2img?
>>20006 Yeah your version is hella old so it won't have that. >So make a gen until it gets the NaN error, then try it again in img2img? Yes. If it still produces NaNs in img2img then that very likely indicates it's the VAE and nothing can be done. If it doesn't then something is wrong with txt2img.
>>20007 I'll get to work. This may be some helpful info, but one thing I noticed is that when I start doing img2img work, whether upscaling or inpainting, NAI.vae will start acting up and shit out NaN errors almost consistently and I then have to switch VAEs.
>>20008 Ok, so I managed to get an image that NaN'd out in t2i, and then I recreated it on img2img with the settings you provided and it generated just fine, should I give you anything?
>>20009 Catbox the txt2img one (just switch out the VAE temporarily so it can actually output the image)
>>20011 oh wait nevermind, I just realized I did the img2img wrong and left the resolution at 512x512 instead of doubling it to 1024x1024
when I recreate it in img2img I get a completely different result
>>20010 Ok, so not sure if this would be helpful but I managed to generate an image that NaN'd without hires on NAI.vae, and attempting to recreate it in i2i inpaint also NaN'd/Black Squared https://files.catbox.moe/s2qi6h.png original t2i https://files.catbox.moe/ns0vxp.png kl-f8-anime2 https://files.catbox.moe/7bvs8m.png original i2i So I take it this may not be good news
>>20011 >>20012 >>20013 Thanks, that's helpful, I was able to reproduce the NaNs too. I actually forgot to mention in >>20004 what I really want is an image that produces NaNs _without_ hires fix, because this seems strictly limited to hires fix so far. Is your exact commit 22bcc7be like you mentioned? If that's the case I'm gonna have to do another git bisect because it seems like voldy fixed it at some point but then fucked it up again. Reason being is that it works on the release commit of 1.5.1 and now my PR's attempt to fix it (https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12624), but does NOT work on the dev branch and (presumably) the very old version you're on. >>20014 Yeah this is what I really wanted, cheers. I can confirm that one fails in fp16 but works in fp32. This gives me enough to start looking into a possible root cause.
>>20015 Maybe the last thing that would be useful is an image generated without any special prompt syntax (including weights) and without LoRAs, and on the base NAI model. Reason being is then I could compare with Naifu when it's in fp16 mode and see if it craps out or not.
>>20017 Does special syntax include BREAK? Because I just got a black square after removing everything else
>>20018 >Does special syntax include BREAK? yes because NAI never gave the user the ability to delimit the token chunking like that
>>20019 ok, prompt is currently >scan, traditional media, sketch, extremely detailed CG unity 8k wallpaper, hakurei reimu >worst quality, low quality everything is on euler a defaults no hires, gonna run generate forever and see if I get anything
>>20020 bless u, anon I have a hunch NAI just forced fp32 on their servers to work around this but I'm hoping that isn't the case. If it is then they literally did just do some DK64 expansion pak type shit to work around it.
>>20021 it's been almost 10 minutes and 300 images later and I haven't gotten a single black box/NaN'd error. I think these special syntaxes are part of the problem
>>20022 (nta) it could be some numerical error when adding two different prompt chunks (idk what's the actual name, not deep enough in this) where a "zero" value works but zero + zero != zero and this blows up somewhere this is probably just schizothoughts though
>>20023 you mean two different 75 token chunks?
>>20022 You're free to try weighting stuff then, I'll just have to double check the weights end up matching in Naifu. Anything else is basically off the table though.
>>20024 yes, since that's what BREAK is supposed to do
>>20023 It won't be this because the part where it breaks is upsampling the latent -- tokens don't matter at that point. It does influence the latent result of course but the numerical error is occurring way after tokens are handled
>>20025 yea lets do it, let me know what you need
Apparently the sharding is also usable for inference, so multi-gpu/tpu inference with low vram (they say 8/12gb) for SDXL seems possible. So you don't need a giga gpu, but multiple smaller ones seem to work
>>19999 Do you still need this? I don't want to copy a whole install again for nothing.
>>20030 nah I figured it out for the dev branch, at least for now. all that remains is 1) is there any simple prompt without LoRAs on base NAI that produces NaNs while using fp16, so I can compare to Naifu, and 2) if there are any that somehow still produce NaNs on fp32, if casting the VAE to bf16 fixes it (this doesn't work on dev currently but that is for unrelated reasons)
hdg is under comfynigger attack
>>20031 >(this doesn't work on dev currently but that is for unrelated reasons) meant to specify this is for fp32. I know bf16 requires Torch nightly and I have tested it before to work, just not sure if the NAI VAE has any issues with fp32 or not.
>>20032 seems to have calmed down, shill is probably on lunch break
>>20032 and I apologize I'm cluttering this thread with technical VAE shit so here's a pic
>>20035 It's all good, I gain a lot from having a fixed NAI.vae platform so I'm happy to help with any more things.
>>20034 they probably gonna try grooming people in other generals, wonder if we're gonna see those niggers pop up in vtai and trash lol. >>20035 where would you even talk about sd tech? this thread is perfect for this. Thematically sdg would be the best but it was overtaken by comfynigger. also cute Marisa
>>20037 There is at least one person on /vtai/ using comfy but they haven't done anything disrespectful (yet) so I've held back from calling them out
>>20038 yeah just using it is certainly fine, but fuck comfygroomers
>>20031 Well I was already halfway through the setup before you posted so I did it anyway and it does indeed produce NaN errors with or without --no-half-vae (actually a black image without and straight up nothing with).
>>20037 >also cute Marisa ty. I stopped visiting /sdg/ around December or January, now I just open the thread to entertain myself briefly for a few minutes every week or so. Looking back I think I really only enjoyed it until about a week after the NAI leak settled. /hdg/ started trailing off after December too but at least it isn't like /sdg/ yet. >>20038 I don't even watch vtubers but /vtai/ is consistently cozy, dunno how they do it. Granted I'm rarely in those threads but anytime I take a peek they're sane
>>20040 Okay, that part is consistent then, thanks for verifying. When voldy is online later he might know what's going on. He did some shit to support SDXL with hires fix but I can't tell why he did it the way he did.
>>20035 you're doing god's work, anon. please continue
>>20041 /vtai/ is the only comfy place on /vt/, the rest of the board is a schizophrenic psychological battlefield (they make neo-/sdg/ look like a joke), but the trouble makers want maximum coverage so they avoid a small coom hangout like /vtai/ and instead go after the global generals or make attention grabbing threads to trigger fanbases.
>>20041 >>20044 yeah I don't care about vtubers in general but the vtai is nice. I really wish there was a good anime general suitable for ecchi/sfw posting just like vtai.
>>20046 /e/ is the closest thing to that unfortunately, /a/ would shit a brick if we tried making a rotating AI thread
(2.16 MB 896x1536 catbox_oltr5g.png)

is it a good time to pull
>>20047 /e/ is absolutely dead. a would be nice, yeah, but their artnigger circlejerk is certainly not gonna let this work
>>20045 might just be an autistic anon that watches PRs like a hawk (like me) >>20046 >>20047 /e/ is fine and I don't even mind it being slow but there's a lot of newfags compared to /vtai/
So what are we looking at now with the vae testing? I'm pretty much from all day other than doing image crops that I can take a break from anytime.
>>20051 (Me) >pretty much free all day***
>>20051 >>20052 see >>20031, maybe skim the thread a bit too tl;dr just seeing if novelai vae is repairable or not
Finally found a nice tool to edit png metadata without doing exif shit or using inpaint with no mask at 0 denoise to insert whatever I want. It's called tweakpng. It also has an option to add it to the context menu, so editing is fast and easy.
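(If anyone prefers scripting it, the same edit can be done with Pillow's PngInfo instead of tweakpng; "gen.png" and the parameter string below are placeholders. Note that only the chunks you copy into the new PngInfo survive the rewrite.)
```
from PIL import Image
from PIL.PngImagePlugin import PngInfo

im = Image.open("gen.png")
print(im.text)  # existing tEXt chunks, e.g. the "parameters" key the webui writes

meta = PngInfo()
for key, value in im.text.items():
    if key != "parameters":
        meta.add_text(key, value)               # keep the chunks we're not touching
meta.add_text("parameters", "edited prompt / settings string here")
im.save("gen_edited.png", pnginfo=meta)
```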
>>20054 damn I should've just mentioned this months ago. I've had it installed for years lol
>>20056 Extremely fucking based, on my todo list now
>>20058 >>20057 >>20056 kek it was my homura pic guess that guy is gonna get a wojak instead of catbox
>>20060 You can also copy paste a tEXt block from an image that has metadata and edit.
be careful, that SAI troll sometimes stalks this place, he may tattle on us kek
can't wait for you fags to start crying about how bad this place has become once you're done stepping in shit on /g/ and then tracking it all over the carpet here
>>20065 womp womp go back
>>20066 zoomie doesn't understand what goes into building and maintaining worthwhile places, treats nice things as a toy, ends up with nothing and cries about how he's been done wrong by the world. many such cases.
I may actually install cumfy in order to create a "workflow" that is just boxes making "NIGGERS" to paste that in catboxes stealth
>>20068 just copy one of their text parameters and shit it up so it crashes
>>20067 couple of catboxes with hanging troonjaks won't force sainiggers here but yeah I agree that getting involved with that shithole too much is a bad idea
ill scrub my shit but im gonna still do it
>>20069 Are there any nodes capable of changing/moving/deleting files? May make for useful shenanigans >>20070 too bad the hanging troonjaks are too wide usually to render properly at least the last one I posted seem to render properly also got trips
>>20067 based boomer
>>20067 cute
>>20067 I've been enjoying this lora
So, is the NAI VAE fixable? Back when I used it I'd get NaNs on both normal gens and hiresfix - with or without --no-half-vae. It was more common with hiresfix but it also depended on which upscaler I wanted to use, sometimes when Latent refused to work Anime6B would actually go through.
>>20077 If you have one that fails even with --no-half-vae I'd like to see that because it might work with bf16. Don't use the dev branch currently because there is an issue that makes it more likely (turned out to be some pytorch autocast issue)
>>20078 What's the implication of getting a NaN error with no-half-vae? How does that help with identifying a fix?
>>20067 this place has already gone to shit, gj whoever was retarded enough to link on /g/ and invite the current crowd. Ruined it
>>20080 are you lost?
>>20081 aren't you?
>>20080 This place has never been linked on /g/ The schizo shitting up /h/ that sometimes comes here found the link in gayshit
>>20082 Been here since #2.
(1.14 MB 1344x896 catbox_jwctr4.png)

I was just gonna ask how long others have been here for and then realized the old threads are gone. Gonna have to trust my memory but I think this is the same image I posted for the original hdg ban waiting room thread.
>>20084 newfag.
I’ll have you know I figured out the original riddle on my own before someone practically spoiled the Mario game and I bitched about it lmao
>>20035 catbox?
>use twatter once a year >need to DM someone >open twatter >find this nigger on my timeline (idk who he is) Do arthoes really? How can they claim they're starving when they dare charging so much for so little?
>>20089 it's just a tired theme at this point man. yeah faggots are ready to spend hundreds of dollars for a doodle just because it was made by their idolized artist. they pay for the name and feeling "honored". same reason why drawnigger threads still exist. they don't want to see an art of their waifu, they want attention from some cool discord kid.
Anyone got miyamoto issa lora? I feel like I've seen one somewhere but can no longer find it
>>20093 Very nice tan. Catbox pls?
comfyUI question: Let's say I wanted to do something like, e.g, changing an input to the sampler at every step. It's not what I'm doing, but let's say I wanted to change the cfg every step, oscillating between 1 and 5 like so: 1 3 5 3 1 3 5 3 1 3 5 3 1 etc until the gen finishes. At present the only way I know how to do this is to have as many advanced sampler nodes as there are steps. Since that's obviously not the right way, what is the right way?
>>20095 >he fell for the meme if you actually want to do advanced shit like this you're better off just using python directly
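(The oscillation itself is trivial to express in Python; the only sampler-specific part is looking the value up per step inside whatever denoise function you control. A sketch, assuming a custom sampling loop rather than stock nodes:)
```
import itertools

def oscillating_cfg(low=1.0, high=5.0, step_size=2.0):
    # yields low, low+step, ..., high, ..., low+step and repeats: 1 3 5 3 1 3 5 3 ...
    up = [low + i * step_size for i in range(int((high - low) / step_size) + 1)]
    return itertools.cycle(up + up[-2:0:-1])

cfg_schedule = oscillating_cfg()
# inside the sampling loop, per step:
#   cfg = next(cfg_schedule)
#   denoised = uncond + cfg * (cond - uncond)
print([next(cfg_schedule) for _ in range(8)])   # [1.0, 3.0, 5.0, 3.0, 1.0, 3.0, 5.0, 3.0]
```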
retard-tier question, does comfyUI by default deny requests from non-localhost? I'm trying to connect from another machine to the web interface and I can't, I've checked that my firewall is disabled. Also, is there a way to change the default port?
go back
What the fuck is happening today?
>>20085 I posted >>2 RIP
What happened to the board owner? Did he die?
>>20102 if he is then I'm just going to post furry again
>>20103 Please don't.
>>20104 you WILL look at (spoilered) furry holos when I decide to tune this shit lora again until I get it working (I may even try training it on vpred zSNR furry models and see how backwards compatible it is)
>>20015 A black blank gen is pretty much the NaN errored gen, right? I just got one with no-half-vae. I tried using a different prompt for the upscale pass and it failed.
>>20106 yes it is >not catbox anon, but the guy helping him yesterday
>>20107 Damn I got 7 loras used there lol. I feel like it blanked out because I used different hires fix prompts, it didn't happen before. I'll do some more though.
>>20108 >>20031 >2) if there are any that somehow still produce NaNs on fp32 I think he still wants that
>>20106 >>20109 Yeah, I wanted to see if a failed fp32 gen works in bf16. The fact it's using LoRAs and such makes it less useful but I'll take what I can get
>>20111 I haven't tried doing a no-half-vae attempt but I didn't get any luck generating an image without a NaN error when not using special syntax running it normally. Should I try doing a run on no half but with weights and BREAKs only and see what I get?
>>20111 catbox keeps shitting itself so i cant upload the loras now. sry Im going to sleep will post tomorrow hopefully
(1.71 MB 1024x1536 catbox_sg4b4j.png)

After I finish these images sets I need to crop I'll do a fp32 no-half-vae run without LoRAs and see what I can get
>>20112 >>20113 Do you mean with a NaN error? Either way no rush anon, rest well
>>20115 Word, cheers anon
>>20117 Yea, so without using LoRAs, weights, breaks or anything, I was never able to generate a non-hires fix NaN error/Black Box image on fp16. So for fp32, do you want me to use special syntax in the prompts without LoRAs just to see if I can get one to happen or do you have any special instructions?
>>20118 btw I was >>20011 >>20014 so I know what the game plan is, just not sure if you need anything different for fp32
>>20118 Without special syntax is important if I want to compare against Naifu. With special syntax (besides 3rd party extensions) is fine but will likely take more work to debug (if NaN w/ fp32 even happens at all).
>>20120 Ok, I will at least do a repeat of what I did yesterday but with fp32. I'll get a NaN Black Box gen with weight/BREAK and I'll do a generate forever for 15 minutes with none of the special syntax. I'll get started in 10 minutes and have something for you within 30 minutes after
>scan, (traditional media:1.3), (sketch:1.2), (extremely detailed CG unity 8k wallpaper:1.2) >BREAK >hakurei reimu >Negative: (worst quality, low quality:1.2) >15 minutes of generate forever produced nothing adding the Zankuro and Ponsuke LoRAs and letting it run for another 15 minutes
>>20105 hi seraziel anon i just found the momiji from june and i never thought i could nut that hard
>got banned again for posting how to filename filter the stupid demon fag on /g/ Next time I get unbanned I'm gonna add it in catboxes.
>>20124 Demon fag? Do I want to know?
>>20122 >another 15 minutes >got nothing again Not sure what to do differently to try and force an error, maybe see what the other guy did to prompt the error
>>20125 some mentally ill retard that should be trip fagging but refuses to, I made a list of all possible ways to filter him and he reported me (because everyone else hates him too so who else) and got me banned
>>20123 Thanks! reminds me I still haven't finished it, especially because last time I baked with picrel like a tard (hmmm I wonder why I had to lower batch size, must've been pulling (clueless)) I have to finish the Inori lora (retagging all eyes and testing shit since it's basically multi-concept), finish that shishi juuroku lora properly, and then rebake everything else lmao I'd like to retrain seraziel (and berseepon09) on zSNR vpred furry models but that means making furry tags too all of that while not being a neet anymore here's some gens from that lora btw, two recent with other style loras on top and one older, bare NAI
>>20124 >getting banned for that reason alone This demon fag dude knows a mod, calling it now, you can't be banned for that one silly reason.
I have a VPN also that works on cuckchan but I'm not gonna burn it on him yet
>>20128 rip, godspeed also i meant the momiji that seraziel made kek, sorry i didn't know you posted one
Sorry Catbox anon, I was unable to force a NaN error on fp32 with all the things I tried that were not extensions.
>>20130 fuck, couldn't help myself, probably should've used my hotspot to post the filename filter
>>20131 ah fuck, I didn't check what he posted for a while makes sense
>>20133 /g/ is absolutely not worth bothering with, that shithole is gone long ago. Don't burn your vpns for that shit, the working ones are getting ry scarce on cuckchan. Although I did get banned myself from /h/ for shitposting when that comfynigger started shilling his filth.
what if catbox with 200mb of text in metadata
>>20135 I saw your deleted posts I'm still shocked that only your stuff was nuked and not the shills. I'm not too happy with how this is turning out >>20136 for what purposes?
>>20137 I probably went too hard with shitposting so it wasnt much of a surprise kek.
>>20137 >for what purposes? fucking with jeets
>>20139 a combination of resolution size and spicy metadata should do the trick then, have it take up the whole screen when they open the catbox link and have a bunch of hanging troonjaks in the metadata
>>20134 tons of extremely high quality stuff including polt, new holo x myuri and more importantly momiji
kek this is the hardest I've seen Cumrag get blown the fuck out in such a short timespan in a thread
>>20142 >from cumfy to cumrag LMFAO
I'm trying to understand something. Whenever I prompt for gym-related clothes (with gym in the negatives) there's always a chance it will give me broken barbells, a gym background, etc. Why? Is it a dataset + tagging issue on NAI's part? Is it CLIP still being CLIP despite """hacking in""" booru-style tags? Both? What do you even call this kind of "contamination"?
>>20144 It's probably a dataset issue but I can try testing something on my finetune, give me your prompt
Made a style lora for hunyanyac/gnsisir https://mega.nz/folder/dfBQDDBJ#3RLMrU3gZmO6uj167o-YZg Full size grid included I have yet to find a checkpoint that can do his flat style justice, every one I've tried just gets mangled and based65 really really wants to add soft shading There will probably be another version if I can't get the tsurime eyes he draws to come out with this one.
>>20146 that looks really good!
>>20145 Not home right now but it's literally just gym uniform/shirt/shorts with gym in the negs. I find that upping the weights for the negs or being too specific (eg barbell, weights, weightlifting, etc) is counterproductive. Either that or SD is just very spiteful.
>>20148 Yeah no, the very mention of gym in the tag and negging gym is gonna cause all sorts of problems. You will need to specify the type of clothing like “yoga pants” or something of the like
>>20150 Which is exactly why I want to know if this is a dataset issue, a CLIP issue or whatever the fuck. There are no other tags to refer to gym shorts, dolphin shorts aren't the same and the tag isn't as "strong". It's the spats vs bike shorts issue all over again. You can recreate gym shirts and gym uniforms but it's a waste of tokens imho.
>>20151 its probably leaning to a CLIP text encoder issue then
Oh Catbox anon, thought you might wanna know about this. I got a NaN'd error without any special syntax but on fp16. Now at first, I thought it was because of my finetune, but I decided to keep the prompt exactly as is and switch to nai-full and eventually I got a NaN error. I have it saved for when you get back on in case you need it.
>>20153 I'm here
>>20154 https://litter.catbox.moe/01r4n4.png (Euler a) https://litter.catbox.moe/n0rfk8.png (2M Karras) Thought I would include getting it in two different samplers
>>20153 i still have to figure out if catbox anon is the actual catbox owner or if he's the guy who made the catbox upload script
>/g/ got datamined for an AI detection application https://github.com/Urist-Mc-Urist/AI_detector lol >>20156 catbox upload script
>>20157 >/g/ faggot's hubris and pride will inevitably fuck us over when he ends up selling his shit to arthoes sites But can this faggot beat adverse cleaner?
>>20155 Okay, so I get NaNs in the webui, but now I realize the dtype env var for Naifu only applies to the diffusion model, not the VAE. They forced it to run in fp32. Changing it to fp16 is giving me a different error but I think I need to throw in some autocasting somewhere so there's still a bit more for me to look into. This means finding an image that produces NaNs with fp32 would be very much sought after though.
>>20159 >They forced it to run in fp32 Wait so they did DK64 the VAE?
>>20159 >This means finding an image that produces NaNs with fp32 would be very much sought after though. Tell me the prerequisites, I have a few hours and want to contribute for once. I don't have NAIFU anymore but I can go fetch it if needed. No special syntax, no loras, NAI VAE (duh), no weights, what else? Pre or post upscale?
>>20160 I need to figure out the current error I'm getting but if I do and if it throws NaNs I think it basically confirms they DK64'd it >>20161 pre upscale would be preferred but you could also just generate the image with an absurd resolution from the start and that should achieve the same thing
>>20162 The only alleged fp32 error we know of was with that anon using LoRAs, I'm starting to think they did DK64 it if Naifu has it forced to fp32. I've been running a generate forever at 2.1 secs a gen on a 4090 with the same prompt and I haven't gotten a black box yet in 250 images.
>>20162 >pre upscale would be preferred but you could also just generate the image with an absurd resolution from the start and that should achieve the same thing I lost my very first gens so I can't confirm if I was using NAI or not but I do remember getting black gens when trying to gen at 1024x back when I was testing my GPU's limits
>>20155 >>20163 So, I get the NaNs in Naifu too. They also intentionally cast the samples passed into the VAE to fp32, which was the reason for the runtime error I was getting. I swapped out the NAI VAE for one of SD's and it worked fine. Considering the rest of the pipeline in the VAE part is just taken directly from the original CompVis repo (like the webui used), I'd say it's pretty much confirmed at this point it requires fp32.
>>20165 Just to add to this I could go down the rabbit hole of seeing when exactly it throws the NaNs to see if it's consistent and attempt (a lot of emphasis on attempt) to modify the weights but my guess is it's not consistent and the VAE is just fucked
>>20165 >I'd say it's pretty much confirmed at this point it requires fp32. So it's unfuckable? What about the times when it blacks out even with --no-half-vae? Just a compatibility issue with LoRAs?
>>20165 Fucking rip. So how exactly are the VAEs made? Are they created from the finetune or trained separately?
>>20167 My educated guess is the weights shifted by LoRAs are too much for the NAI VAE due to how unstable it is so it can't account for those changes >>20168 I believe separately. It's actually been around for years back when GANs were the thing. I also just now realized the kl-f8 anime VAE is finetuned off one of CompVis' VAEs, lol. https://github.com/CompVis/latent-diffusion#pretrained-autoencoding-models https://github.com/CompVis/stable-diffusion/tree/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/models/first_stage_models/kl-f8
>can’t check the furry diffusion discord on my iCuck while on the shitter because of AppStore policy on restricted servers >Don’t want to install DisCuck on my Graphene phone First World Problems are so fucking annoying
>>20169 If someone wants to train a VAE I think this is the place to start btw. It might not actually be that difficult. https://github.com/CompVis/latent-diffusion#training-autoencoder-models
>>20169 >>20171 Even when /hdg/ and /sdg/ were good... wait no, decent... wait no, fine... wait no.... even when they were not complete garbage - yeah that's better - NO ONE ever talked about or made a guide on how to make a VAE despite plenty of people asking, I wonder why
>>20172 Well one thing to note too is even Stability finetuned off the kl-f8 CompVis VAE: https://huggingface.co/stabilityai/sd-vae-ft-mse-original#decoder-finetuning Which originally was trained on this dataset: https://storage.googleapis.com/openimages/web/index.html I'm actually gonna look thru the NAI leak code again just to see if there's any indication how they trained/finetuned their VAE but I don't recall seeing anything last time
>>20173 You have both parts of the leak, right?
>>20174 yes but part 2 is just models, particularly the text ones and a bunch of the test SD ones, nothing really useful for our purposes
>>20169 >>20171 The reason I ask is because when I have a base model freshly trained on top of nai-full, I get a very vibrant yet sharp, and kind of harsh in some spots, color palette compared to NAI, and once I get the finetune to a more stable state I want to try training a VAE for that color palette (assuming the color improves the further I improve the training). >1st Image - old revision base finetune with NAI VAE >2nd Image - old revision base finetune with VAE Off >3rd Image - Current Revision Finetune + merged model with NAI VAE The old revision is pure ufotable screencaps where the current one has more stuff added in, so I used the pure model one to demonstrate the colors but there is more visible training jank compared to the current one.
>>20176 oh and when I merge the fresh model with a checkpoint merge, it inherits the washed out color of not having a VAE on or leaving it on Automatic. Hence the lack of a 4th image with the VAE.
>>20162 >pre upscale would be preferred but you could also just generate the image with an absurd resolution from the start and that should achieve the same thing Started genning with a simple prompt at batch size 8 and I accidentally got a few bangers, no failed gen yet
>>20173 Couldn't find anything, there's a bunch of configs and commit history for the finetune but nothing specifically related to the VAE from what I can tell
>>20178 >>20162 Retarded question but the seed should be random, right? Also I should probably knock down the step count to 20
>>20180 I don't think it matters, I had mine on 30 steps, and yes do random seed, it's not gonna spontaneously NaN a successful seed
>>20180 Yeah that's fine. I'd honestly say there isn't much worth in it now that I know the NaN issue happens in Naifu with fp16 but knock yourself out.
>>20179 I thought it was because I was a codelet but I felt that most of the stuff in the github part of the leak wasn't very useful for people wanting to reverse engineer the training process.
>>20182 >I know the NaN issue happens in Naifu with fp16 How did you get it to use fp16?
>>20184 It reads a DTYPE env var but that only applies to the diffusion model. You need to cast the VAE to half and the returned samples to half because they always expected the VAE to be in fp32. I don't know why the repo is on GitHub but it is, these are the two lines you'd need to change. https://github.com/Limitex/NAIFU/blob/1ca79a9e1a9d10e11af064c4a782f4ad208cd033/hydra_node/models.py#L231C70-L231C70 https://github.com/Limitex/NAIFU/blob/1ca79a9e1a9d10e11af064c4a782f4ad208cd033/hydra_node/models.py#L466C80-L466C80
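(For reference, the shape of the change being described, with generic names rather than the exact variables in hydra_node/models.py:)
```
import os
import torch

def decode_with_env_dtype(vae, samples):
    # vae / samples stand in for whatever models.py actually calls them; the point
    # is just the two casts: the VAE itself and the latents handed to decode.
    dtype = torch.float16 if os.environ.get("DTYPE", "float32") == "float16" else torch.float32
    vae = vae.to(dtype)
    samples = samples.to(dtype)
    return vae.decode(samples)
```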
>>20183 All the finetune code is there but to what extent that's useful I'm not sure. Penultimate CLIP and bucketing were the two major things and those were made public by them directly. I think a couple notable things they implemented were tag weights based on frequency (they literally comment it's to prevent reimu/miku from polluting the dataset) and resolution dropout (???)
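(A guess at what a frequency-based tag weighting can look like in practice, purely illustrative and not NAI's actual code: very common tags get a lower keep probability when captions are built, so they stop dominating the dataset.)
```
import random
from collections import Counter

def build_tag_keep_probs(all_captions, floor=0.1):
    # all_captions: list of tag lists. Tags present in a large share of captions
    # get a lower keep probability, never below `floor`.
    counts = Counter(tag for caption in all_captions for tag in caption)
    total = len(all_captions)
    return {tag: max(floor, 1.0 - count / total) for tag, count in counts.items()}

def sample_caption(tags, keep_probs, rng=random):
    return [t for t in tags if rng.random() < keep_probs.get(t, 1.0)]

captions = [["hakurei reimu", "1girl", "smile"], ["hatsune miku", "1girl"], ["1girl", "sitting"]]
probs = build_tag_keep_probs(captions)
print(sample_caption(["1girl", "hakurei reimu", "smile"], probs))
```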
>>20186 >they implemented were tag weights based on frequency That would be kind of useful for me since sometimes prompting 1girl blond hair gives me a saberface and if it gets annoying I have to waste a neg tagging Saber. >resolution dropout the fuck is that? In fact, what is meant when referring to dropout, such as caption dropout?
I am going to do the forbidden and see if I can port some cumfy nodes to the webui https://github.com/city96/SD-Latent-Upscaler https://github.com/Extraltodeus/noise_latent_perlinpinpin
>>20188 >I am going to do the forbidden and see if I can port some cumfy nodes to the webui https://www.youtube.com/watch?v=tHXEXBsvTKg
>>20149 cute! catboxen?
finally starting hydrus what the fuck is that "darkmode"
(130.65 KB 512x768 catbox_pk7ign.jpg)

(148.09 KB 512x768 catbox_198wgn.jpg)

>>20151 if I prompt gym {uniform|shirt|shorts} and put inside gym, gym equipment, in the negs, then I don't get any broken barbells in any of the 16 gens I tried on based64v3
>>20194 what do you mean?
>>20196 Hadn't seen the style option. Also, is there a way to import files in a way to keep the folder structure I have? all of this is pretty overwhelming for someone who just wants to edit tags in a more streamlined manner
>>20197 While Hydrus does not keep your folder structure, you can group your imported images with special hydrus meta tags called group namespaces. Just highlight all your imported images and make a tag in the following format >group:name >ex: Series:'Name', Author:'name', Genre:'Type' Also, sorting by "Import Time" will at least respect the filename order of the images from the folder(s) you import.
>>20198 (Me) if it wasn't clear (which I don't think I was), you can make your own custom groupings so long as you follow that syntax, so you could use that to group your stuff like the folders you had and make multiple subgroups if you need that level of autism organizing. You can pretty much do whatever you need.
>>20198 >>20199 Okay, thanks, I think I found that.
Just found that you can set "scheduler type" in A1111 master. So if I set a scheduler type, it seems to prioritize it over your sampler choice, so it will choose exponential even if you set SDE Karras as your sampler. I think catboxanon said that exponential should be the best one for SDE, is that correct? And what is the "automatic" scheduler for SDE?
yeah, exponential is best for dpm++ 2m sde. automatic uses whatever scheduler is the default for the current selection (most just use the vanilla scheduler)
>>20203 Meant for >>20201
(1.85 MB 2048x941 xyz_grid-0004-2979705596.png)

(2.08 MB 2048x941 xyz_grid-0006-294417315.png)

(2.08 MB 2048x941 xyz_grid-0005-18902956.png)

>>20203 I mostly used SDE Karras, not 2M SDE. Interesting how distinct Karras looks
>>20197 You can use regex during import to save your whole folder structure as namespaced tags automatically. In the tagging dialog during import, switch to advanced under your desired tag service and in the quick namespaces section click add. There add your namespace name to the first field, ie. folder and add (?<=^D:\\Folder\\Folder2\\).*(?=\\.*$) to the second field, where the D:\\Folder\\Folder2\\ part is the part of the path you want to exclude for the tag, so if your folder structure is like D:\Folder\Folder2\Cute\Whatever, then the tag will turn out as folder:cute\whatever. If you then want to search every file in the cute folder, you just search "folder:cute*". The simple tab also has checkboxes where you can just save the first or last 3 folder names as namespaced tags. Also if you don't want your folder tags clogging up your tag display on the left in gallery view, you can go to tags > manage tag display and search, where you can turn off specific tags and namespaces when you click on the tag filter for multiple file views button. You can still use the tags to search and they still pop up in autocomplete.
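(Quick sanity check of what that expression captures, run outside Hydrus and assuming the regex is applied to the full file path like in the example; Hydrus then lowercases the resulting tag itself.)
```
import re

pattern = r"(?<=^D:\\Folder\\Folder2\\).*(?=\\.*$)"   # same expression as above
path = r"D:\Folder\Folder2\Cute\Whatever\image.png"   # example path, swap in your own prefix

match = re.search(pattern, path)
print(match.group(0))   # -> Cute\Whatever, stored by Hydrus as folder:cute\whatever
```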
>>20206 there we go, wish I knew that
>>20206 Thanks. Also, is there any good way to do grouped tag work? For now I'm stuck using archive/inbox and favorites to hack up something, but I can't even seem to unfavorite multiple images at once so it's kind of tiring
>>20208 Ctrl + Click multiple individual images or Shift + Click an entire row or section and you can tag multiple images at once. Also, in Options => Thumbnails, make sure the "ctrl-click and ctrl-shift focus thumbnails in the preview window" checkboxes are clicked so you can see what you currently click when grouping together.
How does one think that it's fucking reasonable to advertise the ui name in the filename I hate cumrag so fucking much
>>20208 Yeah, just select multiple files and press F3 to edit tags for those files. You can double click on an existing tag in the editor and it will give you an option to add or delete that tag from every file in the selection, or just add your own. To mass edit favorites, select your files, then right click > manage > ratings.
>>20210 At least you can just filename filter them. So long as Cumrag gets visible push back and mock him by telling him to buy ad space every time he posts, that's all you can really do. And then make sure you push back the one or two retards trying to shill on /h/. They haven't shown up today so its probably fine for now.
>>20206 Going back to this, is there a similar system for export? I can only set things for the filename, not for the folder structure
>>20213 nevermind found it, it just didn't want to type "\" for some reason
>>20213 I don't think there is. If you plan on exporting regularly to the same folders, you could probably set up an automatic export (you can set it up to be manual too) for each folder in file > import and export folders. >>20214 Wait, it actually works if you add \ to the filename?
>>20215 >Wait, it actually works if you add \ to the filename? Yep, it's even on the help page only thing is that it makes everything lowercase for some reason
>>20203 Which sampler/scheduler would you personally use in general? I'm doing bunch of xys now and it feels like 2M SDE is kinda bad, although exponential does seem to cause the least amount of fuckups with it when I tested with high cfg. SDE Karras and 2M Karras look better at least.
>>20216 Damn, I just learned something new today lol. >only thing is that it makes everything lowercase for some reason Also noticed that if you use a tag for the filename and it has "\" in it (like the folder tags with the whole folder structure), it will get converted to "_". That's kind of a shame.
>>20217 >SDE Karras and 2M Karras look better at least. This is what I use as well. Hard to recommend stuff when not all them are really comparable (some intentionally by design work better at fewer steps and/or lower/higher CFG). What I've found really damn good though is using the Restart sampler in the hires fix pass. It's mediocre for the initial sampling but for hires it's great. I've meant to make some comparisons to post here but haven't gotten to it.
2M Karras for standard gens, SDE Karras for img2img upscale, Euler A for inpainting
>>20219 >Hard to recommend stuff when not all them are really comparable (some intentionally by design work better at fewer steps and/or lower/higher CFG) Yeah I understand, that's why I asked your personal preference. >>20220 I used SDE Karras(genning/upscaling) and Euler A(for some of the upscales and inpainting) 99% of the time but those xyzs make me reconsider 2M Karras. I wish there was a way to make XYZs with SDE at 2x less steps compared to other samplers
>>20201 You could do this for months in ComfyUI.
>>20222 Hello cumrag, why haven't you bought any ad space yet?
>>20222 You could tongue my anus for months.
>>20222 Why do you even come here?
>>20225 4cucks isn't sucking his dick anymore and it's street shitting hour for the SAI pajeets.
>>20226 Yea but, they know that only a small amount of people come here still and we all hate cumrag
>>20227 Stockholm syndrome then.
>>20226 >4cucks isn't sucking his dick anymore /g/? I thought place is a lost cause
>>20230 Yesterday he got blown the fuck up the moment he showed up https://boards.4channel.org/g/thread/95459181#p95459983 /sdg/ is still a lost cause because the avatarfags don't get necked so nothing useful gets talked there.
I'm not posting gens on /g/ anymore now that people have begun scraping the threads. I normally wouldn't care but with the amount of trash getting attracted to that lightning rod of a place, I don't want to contribute to someone else's work.
>>20231 fucking kek, that's brutal. although there are still some of his simps there. Is that shit he posted actually any useful though? >>20232 I posted there once or twice a week but yeah it's trash. >now that people have begun scraping the threads dunno I always felt that it's a given, especially considering it's /g/. can't do shit about that
>>20233 >I always felt that it's a given I feel disgusted having my gens be in the same folder as cumrag and debo
>>20202 thanks! nice gens!
thanks to whoever delivered that follow up fake catbox lmao
>>20193 cute
>>20238 Well I kneel, good job.
I got about 50 image sets left to crop… The auto crop I used to use for some reason isn’t centering the face for the clean square cut anymore so I have been stuck doing this shit manually Fucking kill me
(2.32 MB 1024x1536 catbox_qxc4wk.png)

>>20206 does hydrus have a script for scraping metadata on import yet? >>20220 for me, it's euler a for the base gen and ++2m karras or ++3m sde exponential for hires fix. euler a does a better job of smoothing over the malign noise that often crops up during lora mixes.
(4.25 MB 1536x2304 catbox_qq16cg.png)

damn bitch it's just Barq's® Root Beer
Reading back over the posts I missed, I think some of you get way too angry about some things. This thread is still super slow so you could ignore the couple posts that bother you instead of getting PTSD from /g/.
>>20243 I only really get mad when they start showing up on /h/, I couldn't care less if /g/ burns, and the random dude that shows up here I have already been able to ignore after a short reply questioning his efforts here.
(772.67 KB 640x800 00046-3090128527.png)

>>20242 MUG Moment™️
>>20245 Has anyone done loras of the cunny rock?
>>20245 interesting take on a root beer cream float
>>20248 Is it good? Any examples? Can't really whip up any at the moment myself
(853.39 KB 640x960 catbox_b28y9p.png)

(825.78 KB 640x960 catbox_r9gdxo.png)

(825.12 KB 640x960 catbox_nucenx.png)

(3.14 MB 1280x1920 catbox_kklkh8.png)

>>20249 Just some initial tests, it seems alright but they pruned tags for character details which is dumb
>>20250 Does it do nsfw well? I've had issues where some loras forget the character when you try to get them to do nsfw Thanks for testing it by the way
(2.35 MB 1280x1920 catbox_f8g8v4.png)

(2.20 MB 1280x1920 catbox_u5kooa.png)

(2.21 MB 1280x1920 catbox_7yr8ya.png)

(2.13 MB 1280x1920 catbox_nwc33t.png)

>>20250 Unfortunately because of the shitty tagging the clothes are stubborn to come off. (nude, topless:1.2) did the trick
>>20252 Yeah I've had that problem before with other loras. Thanks for testing it out.
>>20250 >>20252 I like flat loli bijou more....
(2.18 MB 1280x1920 catbox_74c370.png)

Rock hands, that's a new one
>>20255 why is her skin so shiny
(2.13 MB 1280x1600 catbox_67z3qr.png)

>>20256 just the style lora she a little confused but she got the spirit
>>20254 >tfw primarily a cowtits hagfag but sometimes go on an all night cunny binge like im a werewolf or something. y-yeah I like both I guess.
>>20259 I primarily like vanilla sweet stuff but sometimes I'll binge on the most degenerate shit, prompting for hours or days, then not consume any porn for weeks like my body needs to recover from how fucked up my last fapping session was
the only thing that terrifies me is the crunchy cereal bar
>>20241 >does hydrus have a script for scraping metadata on import yet? https://github.com/space-nuko/sd-webui-utilities/blob/master/import_to_hydrus.py I just edited out the parts that create notes with a filename and replacing " " with "_", because I didn't like those.
>>20260 lol yeah scrolling through my gen folder shows exactly this play out repeatedly
>>19867 New try. In pink on tensorboard. Seems to have gone a bit better (a tad less melting, but still a little), no more orange eyes thanks to retagging. However pruning the outfit tags didn't work out well. May also reduce the weights on the anime side of dataset since it bleeds a bit of the style. Full grids: https://files.catbox.moe/hjav0n.png https://files.catbox.moe/fiomje.png
Did the A1111 webui favicon stop loading for anyone else? My tab feels naked without the shitty little gradio icon.
>>20265 Gradio broke it a long ass time ago and it was only fixed in 3.33.0. https://github.com/gradio-app/gradio/pull/4369 Latest stable webui versions uses 3.32.0. Upcoming release is probably going to be using 3.41.0, whenever that comes out.
>>20266 Thanks anon
jeets are going really fucking hard lately.
>>20268 luckily you're here to complain about it
I need a better system for keep tags, kohya only has a meager global setting, and easy scripts manages to do it by folder, but still no dynamic one. Like telling it you want "X,Y,Z" tags to always be kept if they're there. Currently on 1, if I move to 2 it'll fuck with images that only have 1 main tag
(761.19 KB 1704x2560 catbox_y4e315.jpg)

For the anon who was using multidiffusion upscaling and losing details sometimes, I've gotten a pretty good result with the following:
- tile controlnet as usual
- reference_only controlnet - adding this made the biggest difference to preserving style & detail
- nearest upscaler instead of ultrasharp - avoids the smooshing effect of ESRGAN upscalers killing some texture details
- using 0.6 denoise because of nearest upscaler to avoid blocky artifacts
- using noise inversion because of 0.6 denoise so that it preserves the original
https://imgsli.com/MTk5NjM0 It added too much texture to the background, but I don't mind fixing that in PS.
>>20271 >- nearest upscaler instead of ultrasharp - avoids the smooshing effect of ESRGAN upscalers killing some texture details I honestly still haven't tried multidiffusion but is nearest really better than something like remacri for example? I dislike ultrasharp myself but remacri doesn't seem to kill the details. 2M or SDE karras also should be better at keeping details compared to euler a, at least that was my experience with it when upscaling. maybe I'm wrong though, I'll try multidiffusion later myself
>>20272 A long while ago I tested remacri vs ultrasharp and consistently preferred ultrasharp, so gave up on remacri. But I'll give it another go for this kind of use case. I was going to say "Multidiffusion upscaling suck with any sampler other than Euler or Euler A." But I did a grid to test now with remacri and it actually looks pretty good with everything other than DDIM. Maybe remacri just works better? I'll have a look at some of my older upscales to see if they get messed up with 2M or SDE. I recall 2M Karras giving "too many" details when I first tried it. Yet another knob to tune for future gens.
Finally fucking finished cleaning up the heavens feel images I had slotted for the month, still got some image composition tag fixing that I have to do but if I finish all that this week, I will be focusing on just trying to make improvements to my finetune settings and getting a new model revision out.
Wish I could pay anonymously for one of these colab sites/alternatives. I want to gen shit online but I seriously do NOT want any of these sites to have my name, my billing info or my cellphone
>>20275 Why do you wanna gen shit online instead of locally on your computer?
>>20276 Because my computer is a mid-tier laptop from 2019 with a 1650?
>>20277 Oh fair. For some reason the way I read it I thought you mean to gen on the go or something.
last time I checked a furry checkpoint the quality was shit. where is the new furry science happening?
>>20279 Not sure on the specifics honestly, just that there are two allegedly good finetunes that others have mixed with porn models to improve genitalia or some shit
>>20279 https://huggingface.co/lodestones/furryrock-model-safetensors/tree/main/fluffyrock-1088-megares-terminal-snr-vpred You need this extension plus adding the yaml to your model folder https://github.com/Seshelle/CFG_Rescale_webui Even just as a base model it's pretty usable but it's a furry model and prompting for humans doesn't fix that. If we had a danbooru equivalent I think it would easily beat out NAI. I haven't tried any fine tunes but it would be interesting to see how the different tagging affects how you prompt but it would suck if using e621 tags just made things more furry
>>20281 >said a stupid remark on /h/ about finetune that I probably shouldn’t have >get exposed >people now talking about me Do I have some fucking fan club or what?
>>20281 Didnt mean to tag you in the post above, but wanted to know if the hugging space has anything regarding training parameters
>>20282 >Do I have some fucking fan club or what? Welcome to the club. I have a small but dedicated group of schizos that often accuse other anons of being me or sperg out about me while I watch from the sidelines
>post catbox for anon and a gen >leave my home and come back hours later, check 4cuck /hdg/ and that sperg replied to me again I guess this dude's obsessed with me for some reason.
>>20285 Is he just asking for catboxes or is this the fucker that trolls /hdg/ with the cumrag shilling
>>20286 >trolls /hdg/ with cumrag shill dunno he keeps telling me I need medication for some reason
>>20287 did you take your meds though
>>20288 yes, my protein shake.
>>20288 Idk about that anon but I just took my favorite medicine
>>20290 That sounds like a good idea right about now
>>20291 It's the GOOD stuff
>>20292 I just have ye ol reliable Johnnie Walker in the cupboard, though its still a bit too early for that right now
>>20293 my motto is 6 pm was yesterday
>>20280 >>20281 thanks. curious to check it out
>>20273 How many steps/cfg was this? DDIM got fucking destroyed lol
>>20264 Another bake. Green is last on tensorboard. It's better without the pruning. Orange eyes are kinda back for some reason, but now that it's tagged I can probably just put it in negs. Anime style doesn't bleed as much, but it still struggles for the WhiteVoidDress (but not for the version with cape lmao). I'll try running it for longer with a bit more restarts (for now I had 1 restart per epoch, I'll bring that down a little). I'll also probably switch to kohya_ss (I'm on bmaltais) so I can properly use the toml files for the dataset, in order to use the mean number of keep tokens for each folder. I'll finally bump up the repeats on voidthreads and monster, same with whitevoiddress just in case. Grids: https://files.catbox.moe/5ugprh.png https://files.catbox.moe/tnorh5.png
>>20297 I've also been running with --zero_terminal_snr this whole time, not exactly sure if it's even applied
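Not that anon, but since the toml dataset config came up: here's a minimal sketch of what per-folder repeats/keep_tokens look like in kohya's dataset config. Folder names and numbers below are made up for illustration, not his actual settings.

```toml
# Rough sketch of a kohya sd-scripts dataset config, passed with --dataset_config.
# Folder names, repeats and keep_tokens are placeholders for illustration only.
[general]
enable_bucket = true          # aspect-ratio bucketing
caption_extension = ".txt"
keep_tokens = 1               # default; can be overridden per subset

[[datasets]]
resolution = 768

  [[datasets.subsets]]
  image_dir = "train/voidthreads"
  num_repeats = 4             # bump repeats on underrepresented folders
  keep_tokens = 2

  [[datasets.subsets]]
  image_dir = "train/whitevoiddress"
  num_repeats = 6
  keep_tokens = 3
```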
If I can't revive the game then I will revive the art style.
>>20299 I dont know what this is
>>20300 The most fun predatory f2p gook arena "sports" tps of my childhood.
>>20301 I still dont know
>>20301 >>20303 >2007 >childhood I was in highschool lol
(1.79 MB 1280x1920 catbox_mgpg19.png)

This brat...
(1.87 MB 1280x1920 catbox_i8tv5n.png)

Sometimes I wonder if the AI ignores my negatives once in a while just to fuck with me
>>20307 You're not alone People think I'm a schizo when I tell them SD starts to ignore tokens altogether even after restarting
(1.55 MB 1280x1600 catbox_j1l58k.png)

(1.47 MB 1280x1600 catbox_26pkz6.png)

(1.59 MB 1280x1600 catbox_7j1smf.png)

(1.65 MB 1280x1600 catbox_oahe3j.png)

>>20271 How do I get a good upscale without adding a third nipple or fucked up fingers
>>20308 I have noticed that sometimes introducing a negative tag will add it in anyway
>>20310 It's a 50/50 for me, it's bizarre
>Today I will be productive and do things!! >But I'm feeling kinda down so I'll prompt a bit to feel better >End up prompting the entire day and got literally nothing done this sucks AI is a terrible influence on me
>forgot I had another batch of images I needed to crop >proceed to drink and smoke instead Fuck
I wish there was an easier way to control where the character is looking instead of having them always face the "viewer" or the camera
>>20314 Same, I find that the "looking" tags other than maybe up/down (and at viewer of course) basically don't do shit, especially looking to the side/away. This is probably where the controlnet rig is useful but I feel like that and controlnet in general kinda suck the fun out of pure t2i
>>20314 >>20315 (me) I feel like this is yet another issue caused by NAI's dataset and most likely sloppy tagging. The more I learn about Anlatan/NAI the more I view them as outright incompetent. Then again, even pre-leak I didn't have much faith in a company founded by covert r*dditors on 4cucks then moved back to r*ddit.
>>20316 To be fair, booru tags are just not that descriptive when it comes to directional movement
>>20317 That's assuming stuff is tagged correctly to begin with and that's rarely the case when it comes to anime boorus. Furry boorus (so mainly just e621) are MUCH better when it comes to tagging (but it's nowhere near as godly as people want you to believe lol, trust me) Tags like "looking away" and especially "looking to the side" are functionally identical to "looking at viewer" right now with very few exceptions.
>>20318 Yea, those two are fucking aids. Also >there are looking up and looking down tags >but no looking left or looking right I'm gonna fix that in my model
>>20319 >>but no looking left or looking right You could replace those with away/side... but alas...
>looking away danbooru page >”This tag is deprecated and can't be added to new posts.” Uwotm8???
>>20321 What the fuck?
>>20321 >all those times I negative prompted "looking away" and "facing away" didnt do shit oh god
>>20323 This is clearly a Danbooru issue but what the fuck lmao
>>20323 i can't believe novelai did this
>>20323 Oh my fucking god. Too ambiguous? I get it but that's so fucking retarded for multiple reasons. Broader tags help while searching and you could always use tag implications. Not only that but the old posts are never gonna get updated anyway. >facing to the side SPOT THE FUCKING ISSUE
>>facing to the side >"A character facing to their side with closed or covered eyes." Was someone fucking high when they wrote this? >"The character should be facing to the "side" relative to their body, that is, turning their head without turning their body. If the entire body is facing to the "side" relative to the camera, use from side instead." >examples are not exactly accurate across the board urgh
>>20327 Does Stable Diffusion in general even have the capability to understand tag implications?
>>20329 By proxy, yes. Same way it'd work on a booru. Or... when pruning captions.
excuse me for mentioning this, but is there any way to try cumragui prompt weighting on a1111? I was wondering how it works and if there is any situational benefit to it but I really don't want to bother with noodles
>>20220 What makes euler a better for inpainting compared to just using the same sampler? I use 2m karras for everything.
>>20332 Nta but in my experience it requires less denoise to make perceivable changes, it looks kinda soft and blends well with rest of the picture and it produces pretty clean eyes when you inpaint faces. It's bad for inpainting hands
I fell off in March, I'm just getting back into AI; are the based mixes still the king for anime girls? Also, easyscripts anon, if you still post here you're a fucking saint. I'm really impressed with how far you've taken it since I last used it
>>20334 based64v3 remains the champ, yeah
>>20335 embarrassing
>>20335 But counterslop-v3 makes the best waifus.
>>20336 only 857 million parameters prease undastandu
(2.58 MB 1280x1920 catbox_g47gv5.png)

>>20307 A1111 is personally fucking with you
>>20309 I was going to say you have to enable the tile controlnet, but the metadata says you do have it. You have the CN ending step set to less than 1 though; keep tile controlnet on for the whole pass. I also don't see "Tiled Diffusion" in your metadata, and I think that is actually necessary, because it uses tiles of a size close to what SD was trained on, so it doesn't cause artifacting (when combined with tile CN). One last thing: try Euler A. I know it seems like it'll suck, but give it a shot. With Tiled Diffusion, the "latent tile" size can be multiplied by 8 to get pixel size. So if you want 768px tiles, use 96. That's what I prefer, although 128 (1024px) is more efficient (faster) at low denoise.
>>20341 Thanks for the tips. I've basically been using controlnet tile to guide the first steps of upscaling to minimize the weirdness of latent upscaling, which is why I had the ending step less than 1. I found that "just resize (latent)" doesn't error out from low vram while "just resize" does, so I had just been doing that instead of proper tiling.
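To make that latent-tile arithmetic above concrete, a quick sketch; the ×8 factor is just the VAE downscale, the numbers are examples, and tile overlap is ignored:

```python
# Quick sketch of the latent-tile math above: webui latents are 1/8 of pixel size,
# so a latent tile of 96 covers 96 * 8 = 768 px. Values are examples; overlap ignored.
import math

VAE_SCALE = 8

def latent_tile_for(pixel_tile: int) -> int:
    return pixel_tile // VAE_SCALE

def tiles_needed(width: int, height: int, pixel_tile: int) -> int:
    return math.ceil(width / pixel_tile) * math.ceil(height / pixel_tile)

print(latent_tile_for(768))           # 96, the value preferred above
print(latent_tile_for(1024))          # 128, faster at low denoise
print(tiles_needed(2048, 3072, 768))  # rough tile count for a 2048x3072 upscale
```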
>>20316 We would be eating out of a dumpster if it was not for the NAI leak. >>20332 picrel is cropped details from an earlier image with a variety of samplers. Euler A just looks better, I don't know why. The skin starts looking like sandpaper with odd wrinkles and bumps, and there is unpleasant texture getting added to the plain color of the sheet. My uneducated guess is that the other solvers are "better", "more accurate" and end up trying too hard, hallucinating a bunch of detail that shouldn't be there. Euler A's lack of competence is its strength when it comes to upscaling. >>20342 With hands, I try to fix them with gimp + inpainting on the low res image before upscaling. It's also worth doing an inpaint pass on the face before upscaling; it usually gives a higher quality result.
>>20343 >do an inpaint pass on face before upscaling I always had a shit result when doing this, but I assume you do another pass later after hires/upscale?
>finally unbanned from cuckchan >don't even feel like posting on /g/ anymore despite having ban evaded the entire time weird but now I can do more useful things
(5.09 KB 512x512 catbox_9kcis8.jpg)

(137.17 KB 512x512 catbox_rnpyd8.jpg)

(136.83 KB 512x512 catbox_ufgk2k.jpg)

>>20281 I'm plenty fucked up but thank god I'm not a furry. My gens look fried, doing something wrong. The furry discord gens look high quality, but not better than these threads. CFG rescale and vpred actually fixes darkness, picrel.
>>20346 use cfg rescale at 0.7 with trailing enabled do you have the yaml next to the model, with the same name?
(158.72 KB 640x960 10341-4095665486 orig.jpg)

(157.95 KB 640x960 10347-3057293312 orig fixed.jpg)

(76.52 KB 512x768 10267-3103481175.jpg)

(76.10 KB 512x768 10267-423089162.jpg)

>>20344 No extra pass on the face after upscale. Usually try tweaking the settings and rerunning the upscale, then combine my favorite two with gimp. first pic is original gen, second is face inpaint + arm inpaint
>>20345 Yeah same I think some of their banlist got fucked when the site went down last time, I was gonna post my stuff on /b/ but fuck that board in general.
>>20349 yeah it's a hellhole, it's either spammed by one of the local schizo avatarfag, or the baranigger kills the thread instantly (mods don't do anything and when questioned on IRC say "they didn't know" lmao)
>>20350 Yea I'm just gonna go back to my not so tinfoil conspiracy that the mods are looking for an excuse to kill all the /ai/ generals and letting all the schizos run amok is one way of going about it, cuckchan won't have an /ai/ board at this rate.
>>20345 /g/ is unironically tolerable at euro hours
>>20347 Got this in models: fluffyrock-576-704-832-960-1088-lion-low-lr-e125-terminal-snr-vpred-e98.safetensors fluffyrock-576-704-832-960-1088-lion-low-lr-e125-terminal-snr-vpred-e98.yaml Maybe I chose wrong model. Using 0.7 CFG rescale, but the trailing checkbox is only for DDIM?
(589.62 KB 640x800 catbox_0hq2kt.png)

(1.76 MB 1280x1600 catbox_1uz6x9.png)

(1.69 MB 1280x1600 catbox_o8l5ji.png)

(1.87 MB 1280x1600 catbox_qu7r7f.png)

>>20346 Dunno, seems fine for me. It's fun trying artist tags
>>20346 what am I even looking at?
(2.67 MB 1920x1536 00035-703754923.png)

I consistently get black box failed inpaint with NAI vae right now. Seed not fixed. Started happening after I disabled controlnet and now works fine after I enabled controlnet back. Pretty weird. I had a similar problem when using adetailer with controlnet, it started black boxing face inpaints like that. Yeah I know nai vae isn't recommended for inpainting but still might be an interesting oddity. --no-half-vae --disable-nan-check are enabled
>>20353 >but the trailing checkbox is only for DDIM no, it's just a way of correcting an off-by-one error on the noise scheduling. It doesn't have much impact, most of it is from the cfg rescale itself when the model is loaded, check in cmd if it says "running in v-prediction" or something similar
>>20355 It's just showing that the vpred model can generate pure blackness, while based64 can't. SD was trained wrong and tries to always have a medium average brightness across the generated image. That's why you can't generate night scenes without it putting bright lights somewhere.
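For anyone wondering what the rescale knob actually does under the hood: my understanding, from the zero-terminal-SNR paper the extension is based on, is roughly the sketch below. This is a paraphrase with my own variable names, not the extension's actual code, and phi=0.7 is just the value recommended above.

```python
# Sketch of CFG rescale as described in "Common Diffusion Noise Schedules and
# Sample Steps Are Flawed" (the paper behind CFG_Rescale_webui). Variable names are mine;
# this is an illustration, not the extension's actual code.
import torch

def rescaled_cfg(cond: torch.Tensor, uncond: torch.Tensor,
                 cfg_scale: float, phi: float = 0.7) -> torch.Tensor:
    # plain classifier-free guidance
    x_cfg = uncond + cfg_scale * (cond - uncond)
    # rescale so the guided prediction keeps the std of the conditional prediction,
    # which is what keeps vpred + zero-terminal-SNR models from washing out at high CFG
    dims = list(range(1, cond.ndim))
    std_cond = cond.std(dim=dims, keepdim=True)
    std_cfg = x_cfg.std(dim=dims, keepdim=True)
    x_rescaled = x_cfg * (std_cond / std_cfg)
    # phi blends between rescaled and plain CFG (0 = off, 1 = fully rescaled)
    return phi * x_rescaled + (1.0 - phi) * x_cfg
```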
>>20356 I get problems with the NAI VAE when doing img2img and inpainting as well, but I don't have fp32 enabled. What's a good VAE for inpainting?
>>20358 is fluffyrock custom base model or just the finetune model?
>>20357 Ok, will tick box thanks. I got "Running in v-prediction mode" Creating model from config: models\Stable-diffusion\fluffyrock-576-704-832-960-1088-lion-low-lr-e125-terminal-snr-vpred-e98.yaml LatentDiffusion: Running in v-prediction mode DiffusionWrapper has 859.52 M params. Loading VAE weights specified in settings: models\VAE\vae-ft-mse-840000-ema-pruned.ckpt >>20354 Ok, I can repro those. Looking at some gens on civitai it seems that including artist names is the way to get decent gens. Still doesn't make me feel sad to use Based64v3 as my main model. The furry models have gotten a lot better than when I first looked at them though.
>>20356 This image will fool /a/ 1000%
>>20356 It's especially bad for inpainting since it's the one that causes desaturation on each pass. >>20359 Use one of the SD or kl-f8 ones
>>20349 i miss the spirit of the old /b/ threads >>20362 nah, a human artist would never mix up madoka's outfits or glaze over her soul gem like that
>>20359 >but I dont have fp32 enabled Well it black boxed for me even on simple txt2img when I didn't have no-half-vae enabled. >>20362 Awful hand holding would give it away for sure but I was trying to fix it. Some errors with Madoka outfit too. Was mostly testing regional prompter latent mode and it seems to be working better than I expected >>20363 >causes desaturation on each pass Do you mean for the inpainted area?
>>20360 Fluffyrock is a base model, it's not a finetune of anything. Fluffusion is one too. but there are a bajillion versions of fluffyrock so it's a bit hard to navigate >>20361 seems good don't forget to use furry artist tags for quality (or the lora someone is making, if it released), and use e621 tagging vpred tagging is quite different, mostly in redundancy and weights
>>20364 at first glance they sure will, then the artcel seethe will commence >>20365 >fix image >wait for another Rebellion sequel update so it gets talked about on /a/ >pass the piece off as key art in the OP for a thread discussion
>>20366 Ok, prompting was the issue (and maybe the checkbox a little lol). Getting some decent gens. Dialing the furriness down, can get some cute anthro characters with cat ears, tail and colored skin.
Can we merge the fluffyrock model or is the vpred gonna be an issue?
>>20369 You can. Using the non-vpred checkpoint: futanari factor was done that way. Using the vpred checkpoint, this abomination was made: https://civitai.com/models/124655 (fuck, there's a v3.0, gotta test it out)
could Based64 get some vpred to make night generations possible without lowra hackery?
>>20371 We'd have to merge a model that has vpred into the based mix
is vpred only a 2.0 thing or can you finetune a model with vpred on?
>>20371 >>20372 using supermerger's trainDifference, you could, see the second model in >>20370 >>20373 I don't think vpred requires architecture changes, so you probably could finetune a model using v-loss and zero terminal SNR
>>20374 If I could get some documentation or settings that I need to fuck with in Kohya (I’m still using the powershell scripts not the gui) I can try training a vpred version of my finetune to use in mixing up a variant of based64 from scratch this week
>>20362 why does /a/ get so triggered with any ai?
>>20376 its an /ic/ satellite with artcels and drawnigger
>>20343 >We would be eating out of a dumpster if it was not for the NAI leak. Haha yeah, for sure! Lmao no, fuck off. Maybe the lack of a base would've helped WD get farther than absolutely buttfuck nowhere due to infighting.
>>20378 Are you lost? This a new troll angle?
>/hdg/ is getting trolled now yup, I thought this guy was fishy, hope he likes getting mass reported
>>20381 Tongue my anus. I'm thankful for the leak, I just hate how sloppy base NAI is and how many issues could've been avoided had they actually put some effort into it instead. It makes you wonder what could've been if WD was "forced" to at least be not shit out of necessity.
>>20382 You're only saying that now because you've looked behind the curtain, and you should be glad they did more than Stability had RunwayML do with LAION-5B. The other problem with your line of thinking is that WD had access to the leak; they had devs that could've read up on what NAI did to make their own improvements, and they didn't capitalize on it. It's kind of pathetic that ControlnetDev knows more about NAI's training process than the WD devs, one of whom, according to what people have said, now works for Midjourney.
>>20383 >You are only saying that in current moment because you looked behind the curtain, and you should be glad they did more than Stability had RunwayML do with Laion5B. Like I've said, tongue my anus. Go shill for your not-so-covert-anymore-r*ddit company elsewhere, they couldn't even get their VAE to work properly. >WD had access to the leak WD existed before the leak.
you are a troll, i'm ignoring you, go back and shit up /g/ or something since you have resorted to dodging the central point
(3.29 MB 1280x1920 catbox_w6qsq5.png)

I don't know why the fuck I can't post to vtai so this is going here
>>20386 > I can't post to vtai what do you mean? I posted there not long ago.
>>20387 I get the post successful message then nothing. Or I get redirected to https://boards.4channel.org/vt/thread/0 after a few seconds. I tried on other browsers and other devices on my network but same thing. I would try on data but I'm banned for ban evasion apparently
>>20388 I noticed earlier some of my posts on /h/ got eaten up and disappeared into the void. Must be some kind of cuckchan issue.
(705.78 KB 768x1152 catbox_73uffp.png)

test
spent an hour and a half trying to figure out why multidiffusion was taking eighty minutes to run a single midsize picture and spitting out garbage. forgot I left the SD upscale script on
>>20375 I don't know shit but the other model anon linked had this in the description if you want to try merge b64 with the fluffyrock vpred model and do something similar: >This version uses polyfur-lion-e76-terminal-snr-vpred-e25.safetensors and fluffyrock-576-704-832-960-1088-lion-low-lr-e126-terminal-snr-vpred-e99 mixed with 526Mix to provide the model with even better knowledge and aesthetics. Polyfur is special, as it attempts to include both high quality photographs autocaptioned by MiniGPT4 in its dataset alongside furry images, so that you get the best of both worlds. The model is early in its lifespan, but contains enough specialised knowledge that it significantly improves the quality of this mix. >Unofficial merge of fluffyrock-576-704-832-960-1088-lion-low-lr-e126-terminal-snr-vpred-e99 and https://civitai.com/models/15022/526mix-v14 using train difference with the supermerger extension in automatic1111. >Train difference is explained here https://github.com/hako-mikan/sd-webui-supermerger/blob/main/calcmode_en.md#train, and is what allows this model to exist. TLDR, this model uses the a1111 supermerger extension to merge 526Mix with FluffyRock vpred using these settings: 526Mix is set as model A, fluffyrock vpred as model B, v1-5-pruned (sd base model) as model C, add difference as the merge mode, train difference as the calculation mode, alpha is set to 1, and the output is saved as fp16 safetensors. Is the mix for Based64-v3 a secret? Can't remember if it was mentioned in a previous thread.
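For anyone who hasn't touched merging: the plain add-difference part of that recipe is just per-key tensor arithmetic, roughly the sketch below. Supermerger's "train difference" calc mode layers extra logic on top of this (see the calcmode doc linked above); this only shows the baseline A + alpha*(B - C) so the A/B/C roles are clear. Filenames are placeholders.

```python
# Baseline "add difference" merge: A + alpha * (B - C), applied key by key.
# This is only the simple version, to illustrate the A/B/C roles in the recipe above;
# supermerger's "train difference" calc mode does more than this. Filenames are placeholders.
import torch
from safetensors.torch import load_file, save_file

A = load_file("526mix.safetensors")            # model A: the mix you want to improve
B = load_file("fluffyrock-vpred.safetensors")  # model B: the training you want to add
C = load_file("v1-5-pruned.safetensors")       # model C: the common ancestor to subtract out
alpha = 1.0

merged = {}
for key, a in A.items():
    if key in B and key in C and B[key].shape == a.shape and C[key].shape == a.shape:
        merged[key] = (a.float() + alpha * (B[key].float() - C[key].float())).to(torch.float16)
    else:
        merged[key] = a.to(torch.float16)  # keys missing from B/C pass through untouched

save_file(merged, "merged-fp16.safetensors")
```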
>>20391 It hangs for me sometimes even in the best case. I think multidiffusion has a memory leak. Either that or ControlNet. Sometimes it gets slower and slower and eventually CUDA OOMs, but if I kill webui, start it again, and send the exact same shit through, it generates fine.
>>20392 IIRC, Based64 is supposed to be AOM2_Hard + (HLL3.1 - NAI_Full) AD@1, then previous output + Defmix Red WS@0.5. For the later ones the author said he was practically doing schizo merge experiments with MBW, so I don't think he ever disclosed the recipe (if he could even remember it); we'd have to check old threads
>>20394 Thanks, saving that recipe. Sucks that 8chan has no search ffs. Based64 (esp V3) turned out so damn good considering it didn't even do any block weight merge BS. I guess the exact mix doesn't matter though. Sounds like you take Based64 as model A, fluffyrock as model B, v1.5 as model C and then train difference with the supermerger extension. I might try that when I get some time, but I think my gpu won't be strong enough.
>>20395 >considering it didn't even do any block weight merge The final sum of the parts didn't, yeah, but AOM2 itself was block merged, so there is that. Wish he was still here, he kind of disappeared along with everyone else
>>20396 >Wished he was still here felt like he got disappointed that his newer merges flopped. honestly though, I don't get the praise b64 gets from /h/. it's good but there were other reasonable aom2 merges out there as well that weren't overfit to hell. and the overfit, pretty-by-default models have their uses too, unless you really enjoy stacking a bunch of loras for every gen
>>20397 Those lightning in a bottle models like Based64 and Anyv3 always seem to outshine anything else ever made by their creators after
>>20398 yeah that seems like it sucks good thing it'll never happen to me haha
I was on and off this thing for some time, but what actually led to the consensus that based64 is undoubtedly the best one for hentai and there are no better models? Its lora compatibility and good nsfw? Has anyone actually done xyz plots to compare this shit?
where's aom4
>>20400 >Has anyone actually done xyz plots and compare this shit? nope, we're all counting on you
>>20400 it just so happened to have very good lora and embed compatibility that has yet to be replicated since >>20401 Never, because like the examples above, AOM3 fell flat on its face compared to AOM2
>>20403 >it just so happened to have very good lora and embed compatibility that has yet to be replicated since wouldn't NAI and aom2 have better lora compatibility technically? I don't think it's the main argument >>20402 yeah I expected that answer but I simply wondered how people came to this conclusion
>>20404 just a lot of gening tbdesu, i don't really bother with 1:1 testing anymore because it's a pain to do it at a meaningfully large volume
How do I set the aspect ratio buckets for 768x768 training resolution or is it done automatically when I set the training resolution?
Hydrus chads, is there a way I can filter a namespace tag to become a regular tag? An example would be turning "artist:Greg Rutkowski" "artist:Hirohiko Araki" into just "Greg Rutkowski" "Hirohiko Araki"?
>>20407 and to be specific, I mean when doing an export so I can train
>>20407 honestly it's the kind of thing I don't even bother learning and just use a python script for but maybe you can just filter for that tag, select all, put in the new tag, remove the old tag, and voila
>>20407 You can process your tags when exporting. >click on the "no string processing" button in pic rel >click add > string converter >click add >select regex substitution >regex pattern: "^artist:" (without quotes) >regex replacement: nothing This will convert all artist namespaces to regular tags. Alternatively, you could use notepad++ and do a multifile find/replace after export.
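And if you'd rather go the python script route mentioned above, a throwaway version of the same regex; this assumes one comma-separated tag file per image, so adjust it to however your export actually looks:

```python
# Throwaway script doing the same thing as the Hydrus string converter above:
# strip the "artist:" namespace from exported tag files.
# Assumes one .txt per image with comma-separated tags; adjust to your export format.
import re
from pathlib import Path

export_dir = Path("hydrus_export")            # placeholder path
namespace = re.compile(r"^artist:", re.IGNORECASE)

for txt in export_dir.glob("*.txt"):
    tags = [t.strip() for t in txt.read_text(encoding="utf-8").split(",")]
    cleaned = [namespace.sub("", t) for t in tags]
    txt.write_text(", ".join(cleaned), encoding="utf-8")
```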
Omega good job
(2.71 MB 1280x1920 00027-1677626075.png)

>>20411 Fucking image failed to upload
Ez scripts anon, when you catch this. >update sdxl branch >worked before, doesn't work after update >main.py line 13 / 30 >config = json.load(f) >json\__init__.py line 293 / 346 >decoder.py line 337 / 355 >json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) 2nd part of the monologue: I loaded up the old version that you abandoned, it worked fine, and it seems to be way faster using the same settings for the LoCONs I was making in the new version, like 2-3x faster. I don't know if it's not using the algo right since I haven't tested this batch yet, but it was something I immediately noticed.
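That JSONDecodeError at char 0 almost always means json.load got handed an empty or half-written config file rather than broken JSON syntax. A hedged guess at a guard in case it helps; the function name and paths here are made up, I haven't read the actual main.py:

```python
# Hedged sketch of a guard for the crash above: "Expecting value: line 1 column 1 (char 0)"
# usually means json.load() got an empty or truncated file.
# The function name and behavior are made up for illustration, not the actual easyscripts code.
import json
from pathlib import Path

def load_config(path: str, default: dict = None) -> dict:
    p = Path(path)
    if not p.exists() or p.stat().st_size == 0:
        return dict(default or {})  # treat a missing/empty config as "use defaults"
    try:
        with p.open(encoding="utf-8") as f:
            return json.load(f)
    except json.JSONDecodeError as e:
        raise SystemExit(f"{path} is not valid JSON ({e}); delete or fix it and rerun") from e
```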
>>20410 >>select regex substitution >regex pattern: "^artist:" (without quotes) >regex replacement: nothing This is what I was looking for! Thanks!
>someone made an /sdg/ in /ic/ fucking kek
>>20415 gigabased dramafarmers
>>20416 it looks like it was cumrags that did it because they all flooded in and the thread got deleted
>>20398 >>20397 To be fair, it's better to have a lightning in a bottle hit than not have it at all. Companies and individuals spend decades chasing for that one viral hit or their lightning in a bottle success but never find it so it's better to have that achievement under your hat and be proud of it over having 10 flops and 0 successes.
>>20418 I'm not disagreeing, but some people can't handle only having that one hit and not be able to replicate another success after and just feel like it was just a fluke
>>20419 I call that the Ken Levine syndrome.
>>20415 That's hilarious goddamn
>>20420 To clarify, it's System Shock 2 vs the bi*shocks.
>>20422 >strike lightning in a bottle >twice >ruin it by making Bioshock Infinite a fate worse than death
Random ufotable schizophrenic lighting no new model yet, just doing tagging clean up by doing random gens to see what tags I need to fix
>>20423 >strike lighting in a bottle >ruin it by making 3 bioshocks ftfy
>>20425 True but Infinite was hated by even the normies that got sucked in by the original. That game killed his studio.
>>20426 Was it actually hated? I swear you can't look up anything about that game without running into porn or porn memes SFM was a such a huge fucking mistake.
>>20427 People don't like Overwatch but like Overwatch porn. That steam release fiasco made it very clear to everyone if you lived under a rock. >SFM Its the people that it attracted that was a mistake
>>20428 >That steam release fiasco made it very clear to everyone if you lived under a rock. I was in high school and heard my classmates talk about the game a few times but I didn't like 1 and 2 so I never even bothered to look it up or look up the drama surrounding it. I was too busy playing DotA all day.
I didn't prompt this
>>20429 >Dota over OW based
>>20431 I eventually bought OW because the same classmates promised they'd play it with me while I was stuck in the hospital for a good year and half and thought "whatever it's a shit game but at least I won't be lonely" I booted the game up a total of maybe three times and I played with them about... never. Great friends, huh?
>>20432 Feels bad man, had the same shit happen to me with Destiny 2 at launch. Got bullied into getting it after I quit D1 for being nogame the game, got met with the same nogame and my friends didn't want to raid or do try hard PvP so I quit again not long after.
>>20433 I almost bought Destiny 2 but didn't because I had nobody to play it I got gifted XIV by a friend because I really wanted a new MMO to play but I don't stay subbed because I don't have anyone to play it with and ERP isn't my thing
>>20433 I have 50 hours in D2 and I remember very little of what happened during all those hours. Gonna try it again as soon as I build a new rig. It's basically the borderlands-esque game I crave from what little I can remember but my 3570K hated it.
Civitai is cursed There really should be an anime tag so im not flashbanged with 3DPD gay bara sex or furry shit In desperate need of eye bleach right now
>>20436 I think your problem was visiting that shitty place to begin with
>>20436 There are some decent loras on there but it's a fucking needle-in-a-haystack deal. But yeah, holy shit, scrolling down and getting your eyes assaulted by furry anuses and gay dicks is never fun. I just blocked most of the gay furry bara tags and creators.
>>20438 Is it possible to share blacklists? Can you share yours?
(16.49 KB 493x625 MyBlacklist.png)

>>20439 I don't see an option here (unless an anon can tell me where I can find one) either way here is my list, those blocked users primarily share gay and furry works, any straight 3DPD stuff isn't blocked (mainly cause I don't give a shit).
>>20440 I never had this problem, but it could be because I search only checkpoints
>>20436 The worst curse is getting a lora of a style/chara you like and looks decent in previews but realizing that its fried unusable garbage. I learned to mentally block furryshit a long time ago so I don't even see it kek.
>>20442 This is why I never check that place for anything unless it's the only place to procure something like a model or some shit. Speaking of which, what are some tried and true artist and character LoRAs that you've used and/or that were made by anons?
Worst creator that regularly gets 5 stars on Civitai? I'll start: Malcolmrey. Just awful. I've tried several of his LORAs/Lycos and they're all just absolute dogshit, that only work in the narrowest of prompts and checkpoints.
>>20444 >I've tried several of his LORAs/Lycos and they're all just absolute dogshit, that only work in the narrowest of prompts and checkpoints I thought that's how all non-NAI trained loras worked lol.
>>20445 I've trained my own on models that aren't NAI and they come out fine. I've also found some decent stuff on Civitai, but it's not super common. I'm too lazy to make my own character LORAs. I just wanted to see Megyn Kelly getting her back blown out, but w/e
Is anyone taking style requests? I was thinking Waterkuma might be a good one.
>>20446 Every realism lora I've tried was absolute garbage that worked exactly how you described before. Even the fried 0.6 shit for anime was much easier to use compared to that garbage. Even considering all its faults that get mentioned itt, NAI was a blessing imo.
>>20435 You're not missing much, anon. It was OK at launch, but like all the battle pass games, it's now garbage revolving around FOMO and with mechanics and reward systems that dictate you treat it like a second job. Just watch Morbius instead.
>>20443 >Speaking of which, what are some tried and true artist and character LoRAs that you've ever used and/or made by anons? Original CSRB, new takorin from here, fizrot also from here, seraziel from here I wish the guy who made these would come back and retrain the pre-LoCON ones, they're alright but they could definitely be better. https://mega.nz/folder/OoYWzR6L#psN69wnC2ljJ9OQS2FDHoQ/folder/D1hGQCZI
>>20449 >it's now garbage revolving around FOMO I'm only weak to Arknights' FOMO so I'm good
>>20394 >Defmix Red Seems like the author got a little meltie few months ago. Anyone got the model saved? Can't seem to find it.
kohya uses comfy now, he even made a node for his latest thing https://github.com/kohya-ss/ControlNet-LLLite-ComfyUI good times ahead
>>20452 I downloaded all of them, I’ll reup it
This took some doing but it was worth it. All my character loras updated https://rentry.org/zp7g6 >>20447 I'll see what I can do
>>20452 >>20454 https://pixeldrain.com/u/b1tp6mnr This is only Def Red, but I have the rest that I can try uploading another time
>>20441 Any good checkpoints?
>>20457 nope, I only end up downloading whatever /g/ and /hdg/ say they use if the only option is to download from civitai
>>20458 Damn I downloaded some loli japanese model that I like so i'm on the lookout for other models all the time
>>20453 More like comfy times ahead amirite Cumfy fucking sucks I hated using it and the only reason I'd ever use it is if A111 is not an option at all
>found a 'colab for complete retards' guide >Click it >Has more options, buttons and text than the lastben and nocrypt collabs this is very funny and amusing
Is there a guide on composable LORA or mixing LORAs together if I want to use three character loras to generate three character in one picture?
>this is posted >suddenly /g/ got really quiet >no back talking >no samefagging >no pajeet quality posts is it finally over?
>>20462 I remember someone made a webm where he showed off prompting multiple generic girls and then inpainted each individual girl with a different character LoRA Mixing LoRAs to my knowledge would not help you generate multiple characters in one scene. Regional prompting might be another way to accomplish that.
>>20463 snitched about what?
>>20464 How do I use regional prompting for that?
>>20453 Why should we care about something so insignificant?
>>20465 Honestly it can be so many reasons at this point But I think the shit he said last night might have actually made their way to the yes of someone who didn't like seeing him mouth off
>>20462 Just use regional prompter in latent mode https://files.catbox.moe/2ig495.png
I see this in master when I pulled 254be4ee Add extra noise callback But how do we use it?
>>20468 >But I think the shit he said last night Out of the loop, what happened?
>>20471 said he would kill auto1111 (the UI), but can be framed as wanting to kill auto1111 (Voldy) as that is his git username
>>20447 Here's v1 https://mega.nz/folder/2FlygZLC#ZsBLBv4Py3zLWHOejvF2EA/folder/rUkHjK6C Not great at the moment, probably have to do some pruning
>>20472 >>20471 >Could work together >Could use the relationship as friendly rivalry >Obsessed with shilling his UI everywhere and "killing" the competition sometimes I wonder if humanity is just doomed as a species
>>20472 >>20474 lmfao cumrag sounds like a petty jew (redundant) and acts like one too so
>>20474 Yeah I was trying to subtly talk sense into him because this retarded console war is just tiring and there could be a chance to somehow repair /sdg/. But yeah he simply decided to double down. Hope he got in trouble for this shit.
>>20476 The way he doubled down just felt like watching an angry child in the playground back in elementary school, hell-bent on breaking everyone's toys. Although I will admit that whenever I see his avatar, I don't even read his post, I just tell him to fuck off. Only when I see other people greentext some particularly interesting take do I go back and read.
>>20477 >The way he doubled down just felt like watching an angry child in the playground as an elementary schooler and being hell bent on breaking everyones’ toys. It really seems personal. Could he be so narcissistic that this is all due to a rejected pull request? Possible but it's so fucking bizarre.
>>20478 I mean, didn't vlad basically do the same? he was a notorious shitter in the a1111 repo before doing his "own" UI
>>20477 >>20478 I mean it's cuckchan People hold grudges for fucking forever over the pettiest shit so imagine if you took one of these autists that hold petty grudges and gave them savant-tier coding knowledge then that's ComfyUI
>>20480 >savant-tier coding knowledge Cumrag is affiliated with SAI, what are the chances that SAI does the coding and lets the sperg advertise it? Your company looks better in the public FOSSfag space if you "support" a project like that.
>>20480 I've had a group of autists that were absolutely mindbroken from when I used to shitpost in a /vg/ general and now they're out there desperately trying to seek me out in any post that reminds them on me like ack thinking everyone that ever disagrees with him is *kemi The True 4chan Experience :tm:
>>20482 Forgot to mention this was two whole years ago and they're still at it. For some reason.
>>20479 Yes but he shat the bed on a fork, but Comfy “made” a new UI. Emboldens him to act like a petty cunt at every turn even when dealing with valid criticism. >>20481 No he actually made it, he documented his shit on /g/ and there are screenshots and Desuarchive links.
>>20482 FGO or Fire Emblem?
>>20485 Hoyoverse is all I'll say
>>20486 Did you bait people into pulling Qiqi?
>>20487 No but I got baited into rerolling her when I started...
>>20488 lmfao
>>20486 >genshit I forgot ufotable is attached to make an anime project with them… fuck
>>20490 I would post a smug Hu Tao but I haven't played my wife ever since I pulled Ganyu. ... I haven't had a reason to play anyone after pulling Ganyu now that I think about it.
>>20480 >savant-tier coding knowledge is it?
>>20491 Genshin is actually pretty fun and I think the game has come a long way since launch now that it's in 4.0 but it has also attracted an insanely autistic fanbase AND haterbase both in EN and CN so good discussion about it is rare
>>20493 I mean dude built comfyUI and managed to get hired by a company so that's definitely a lot better than the average autist that is on neetbucks and spends 15 hours a day shitposting
>>20494 I only care about the girls
>>20495 there's a huge gap between a shitposter neet and a savant genius programmer
>>20491 The gacha rates are absolutely abysmal and the way the characters -don't- work means you sometimes ""NEED"" dupes. Multiple dupes. Of rare units. It's predatory as fuck and it sucks but there's a fun game underneath, it's as if Boredom of the Wild had good combat instead of just the dodge > flurry cycle. Just don't get too obsessed over stats. >>20494 I haven't played since 3.1 or 3.2, didn't log in until the Shenhe banner (fucking finally), got her then stopped logging in again. How's 4.0?
>>20498 >Just don't get too obsessed over stats. honestly, just play main story and bail until next update
>>20498 >How's 4.0? Idk, what do you like about the game or what made you quit? My personal opinion is that I liked Sumeru's archon quest more than the Fontaine one for story but Fontaine as a whole is a way better region and far more fun to explore. Furina is pretty cute and a fun character too and the underwater exploration is cool.
>>20500 >what made you quit Nothing in particular honestly. I absolutely hated Inazuma's map and puzzles so I was excited for 3.0. I was away when it came out and when I finally got to try it I found out that Sumeru was making my i5 shit the bed even more than usual so I just dropped it. Also I had zero space and couldn't keep updating the game like 10gb+ at a time. The last thing I remember is the not-Pokémon tournament. It was fun but between that and the card game I kinda realized they're running out of ideas and I really don't want the events to be "hey we're ripping off X popular thing this time".
>>20501 Then you'll probably enjoy Fontaine then if you can run it now assuming nothing in particular made you quit. I personally feel like Sumeru and Fontaine are pretty fun but I've never been a fan of desert areas in any videogame ever.
>>20480 comfy is clearly an autist but I don't get why so many people enable him. this whole thing really could be more of a friendly rivalry but instead we get this mess and actual shills
>only 5% of my dataset had image composition tags AHHHHHHHHHHHHHHHH
>>20503 I assume there's others with a grudge against A111 or his own friends
>>20504 so that's just 15000 saber pics out of 300k
>>20503 I clearly remember noticing that when word got out Comfy had become a Stability employee the backlash against his presence was swift, and from my perspective it looked like he was trying to hide it instead of disclosing his position. And since he already had a massive hard on to hate on Vold plus the backstory of Emad seething over Voldy, it felt like a recipe for disaster that was not going to resolve objectively.
>>20506 You joke but I think I have 28k saber images not counting alter lmao
>>20503 I just hate a shill and comfy shills are pretty annoying. I'm sure it's fine, I like what I'm using and not interested in moving.
>>20508 >>20508 nvm, I was WAY off
I started trying to watch vtubers after all the vtuber posting around these parts but the fake laugh/giggle a lot of these girls do every 2-3 lines really gets on my nerves like dear god please talk normally why are you like this
>>20511 I remember when Kizuna Ai became a thing it looked cool but soon enough this whole thing started feeling way too fake and commercialized for me to be able to enjoy watching it. Girl designs are cute though
>>20512 >way too fake and commercialized Oh... If only you knew how bad things really are...
>>20513 Wow they really took kemono friends and raped the corpse
>>20512 Some vtubers feel like they're actually being themselves for better or for worse Some vtubers feel like they're trying way too hard to be cute and come off massively fake and not genuine at all
>>20514 At first they raped it in order to squeeze a few more cents out of it. They've been raping it just for the hell of it for a few years, KemoV being the most egregious attempt at flipping off the fanbase. They keep shitting out mobile spinoffs and KF3 updates but no one is playing them.
>>20515 >Some vtubers feel like they're actually being themselves for better or for worse Pretty sure it's just them being more competent at acting
>>20517 Usually the ones that get to be themselves are the indies since they don't get slapped by management while corpo vtubers come off as fake. Especially Holo/Niji as the fakest.
>>20517 Most small 2views/indie vtubers do it just for fun
>>20518 I don't care if Botan's feet are fake, I WILL continue licking the screen.
>>20520 Is botan the one that actually plays games and is one of those rare specimens that is also good at them? I never watched her because she's JP
>>20515 >Some vtubers feel like they're actually being themselves for better or for worse Mori (for worse) >Some vtubers feel like they're trying way too hard to be cute and come off massively fake and not genuine at all Kiara
>>20521 Yes that one.
>>20521 I kinda hate how most vtubers don't really play games and mostly just talk or try to play games and suck so much that it's frustrating to watch
>>20522 Mori is so cringy in her tweets that I've avoided her content or stuff she's in due to it
>>20525 She's a literal wigger rapper and coal burner. Usually I just ignore the main/roommate channels but when you put no effort into hiding who you are it's pretty hard to ignore.
>>20524 I'm disappointed that there is no chubba that im aware of that plays DDR or Beatmania. I can understand Hololive because gaynami but indies can sign up for DDR Grandprix or beatmania Infinitas and stream it
>>20526 >>20525 fuck she was one of my go-to prompts when I went to vtai because I liked her design. it's over
>>20528 Even before I found out about her roommate I knew about this. The fact that Mori used to tripfag on on /mu/ makes me feel dirty
>>20529 Remember to keep roommate/dox stuff out and moderate yourself Just in case
>>20530 my bad
>>20530 Why? It's public info that's been out for years.
>>20532 People don't want their immersion broken
>>20532 Because doxxing is universally frowned upon and it'd be kinda dumb to get in trouble over something completely unrelated to Ai stuff? >>20533 Some of these streamers break the illusion themselves honestly
>>20533 >>20534 It's not doxxing lmfao it's literally been public info for years and the wigger never attempted to hide it. Literally just another pseudonym.
>>20535 reminds me of fags who would constantly joke about coco and kson but god fucking forbid you didn't hide it behind some ironic joke
>>20536 I don't know anyone that knew Coco and Kson that didn't already know.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.6.0-RC If any anons want to try this out and report problems here feel free. imo this is the most stable and performant release by far.
>friend sends me aztodio art >asks if it's ai >mrw
so many unanswered questions in the /h/ threads, no one cares to answer them anymore lol
>>20538 >DPM++ 2M SDE Heun Exponential DPM++ SDE Euler Exponential Karras Ancestral when?
>>20541 Featuring Dante from the Devil May Cry™ Series
Is clip skip xyz not working fixed on new release?
>>20540 What questions? But anyway, people get tired of answering the same questions; some sort of FAQ is really needed. I'd imagine it's extremely confusing for someone to get into this right now, especially if you're tech illiterate
>>20456 Thanks, I will try redoing b64 recipe with newer hlls and see if it works out
>>20456 Also you could upload all of them to huggingface I think? Their storage seems bottomless
>>20545 I wonder why based anon never tried doing the exact same thing but with newer HLL and optionally one of the better AOM2H derivates
>>20538 Seems like controlnet depth_zoe is broken and throws an error instead of working.
see you next week, a pretty big NMS update just dropped
>>20547 >one of the better AOM2H derivates Which ones?
>>20541 All the samplers that actually support ancestral are added as separate samplers. Karass and exponential being added as "separate" samplers is still a really bad decision imo but we must live with it for the time being. >>20548 Well that'd be related to the ControlNet extension
>>20551 >Karass This was not intentional I swear
>>20546 I’ll set something up when I can
>>20547 >>20550 He was trying to mix his own AOM derivative after getting shit results with AOM3 before he disappeared.
I thought this was 1.6.0 but turns out just running git pull is not enough
>>20541 >>20542 Special Edition Legendary Deluxe Edition Game of the Year Edition
(3.04 MB 1280x1920 00026-1201375210.png)

>just found out Magni and Vesper are graduating >inb4 "muh males" I actually liked Vesper...
>>20558 >muh males yep, they're ok but the fans are cancer so I'm not sure how the backlash is going to be, Kiara of all people had her chat spammed with questions about em
(3.09 MB 1280x1920 00031-3925607224.png)

(2.80 MB 1280x1600 00039-3557167234.png)

>>20559 From what I read she was the first to stream post-announcement so it was kind of inevitable she was gonna get hit with the brunt of the questions. I hadn't even been keeping up with chubbas since December so this was just a unfortunate surprise. Also remembered why I don't step outside of /vtai/.
catbox is kill
Litterbox is fine
we're back
>furry TPUs run out in september AI is over
off-topic but what's the deal with the maid anon on /g/? what is this maid shit? is this some tranny larp? all of his posts read like fucking hieroglyphs someone is accusing him of embedding malware and pizza in the shit he posts
>>20566 What the fuck are you talking about?
>>20567 This guy, every time I visit /g/ he has a thread up or there's always a "maid" thread.
>>20473 Haven't checked in in a few days but thank you
>>20568 just a larping dude shilling his crap it seems. There's no shortage of schizos on /g/ I thought you were talking about maid anon from here
>>20570 >I thought you were talking about maid anon from here I thought I made it clear enough that I wasn't referring to our maid anon kek
>>20565 So was it a free ride or what? Wonder if there is a way to get on whatever program he was on
>>20572 I don't know the exact specifics but it does seem to be google tpu gibs >could try snagging cloud resources like lodestone and drhead; the barrier of entry seems to be "can you fill out a form?" what that comment doesn't say is the second barrier to entry: using TPUs and JAX
>>20572 found the link, it's google TRC (TPU Research Cloud) https://sites.research.google/trc/about/ it literally is gibs
>>20574 >furry porn research Here's one of the reports he sent them https://docs.google.com/document/d/1StIQKi4PxZ5j8otyDafN8uow4ayB9KDrnivVWD86hvs
>>20573 >>20574 >>20575 I'm gonna read this up and see if I can jump in on this so I don't have to spend 3 days waiting for a model to train
>>20573 >>20574 >the barrier of entry seems to be "can you fill out a form?" Unironically true
>>20577 That was easy
>>20578 wtf lol
>>20578 literally that shrimple you just have to never mention what the model is trained on and voilà
>>20578 did you really get google gibs with job title being "Glow nigger assassin"
>>20578 hahaha what you going to use those TPUs for?
>>20580 Now it's just figuring out how to get Kohya or an alternative working on this thing and doing a test run >>20581 That was just a bit; I didn't use any of the Terry Davis info on the application, and I'm glad I didn't, because it was not an automated process. I waited about 3 hours for approval. If someone wants to make random David Davidson and Jane Doe accounts to get free TPUs you are welcome to try. >>20582 R&D my finetune on something beefier than a 4090. Having to wait 2 days just to see if certain training settings work or not, without being able to use my PC during that time while my GPU cooks my room, is not something I could endure pulling off.
>>20583 >not something I could endure pulling off consecutively**
>>20583 >Kohya you can probably run pytorch on TPUs, but you'll need to change some things in kohya The best way of doing it is probably getting the jax scripts from the furry trainer; it's better for TPU, but it's a bit different from pytorch
>>20583 based
>>20585 >>20585 >proprietary code work better for proprietary hardware of course it does, and I couldn't find anything other than: https://github.com/lodestone-rock/SD-TPU-trainer https://github.com/lodestone-rock/tpu-trainer-stash but they havent been updated in forever, probably have to contact Lodestone directly
>>20587 >>20587 must get them pregnant!
>start checking the furry's profiles out of curiosity >see that he has a public cash flow graph, probably for any fees he needs to pay google >denominated in Rp IDR Really? He's a Seanig?
>>20588 yep that's probably for the best
>nigger and bestiality lover on /h/ reported my post calling him out lmao
>>20592 20 bucks it was AnimationAnon that got you
>>20589 Just gotta make anime real first.
>>20424 What's your method for weeding out misbehaving tags? Working on a general concept LORA, and it's getting there, but I have a lot left to learn and there are things I wish I'd known from the start. Like if you have group sex images in your dataset that share tags with non-group-sex images, they can bleed over and generate some real Cronenbergs. Those cooperative fellatio images really fucked a lot of the versions.
>>20590 I always thought you faggots were /pol/tard memeing about most SD people being from India until I looked at how many Indian actress loras were uploaded CivitAI, now im gonna feel weird about using a model that was probably merged using some poo's technology.
>>20596 Stereotypes exist for a reason, and its not just /pol/, /biz/ also absolutely hates having to deal with pajeets and their scam coins. Although in the case of the above, IDR is the Indonesian Rupiah, so Furry model man could be Indonesian.
>>20595 My main form of testing has been running other anons' catboxes through my model and seeing how the results compare. I will start poking and prodding the prompt, usually removing all the boomer prompting or word usage that isn't booru tags, and then play around with what is left. If I get something screwy I will start shifting or removing tags until the abnormality is gone, then try the suspected tags on new compositions to see if the problem comes back. In my current case, image composition tags such as portrait, upper body, and cowboy shot were not responding to my prompting even when forced, so I checked my tag numbers and found out only 5% of my dataset had those tags, so I need to tag pretty much all my images with that info and then test on the next model bake.
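If anyone wants to run the same kind of coverage check on their own dataset, here's a quick counting sketch over plain .txt caption files; the directory and the tag list are just examples:

```python
# Quick sketch for checking tag coverage across .txt caption files -- the same kind of
# check that surfaced the "only 5% have composition tags" problem above.
# Directory and tag list are examples; adjust to your own dataset layout.
from collections import Counter
from pathlib import Path

dataset_dir = Path("train")
composition_tags = {"portrait", "upper body", "cowboy shot", "full body", "close-up"}

counts = Counter()
total = 0
for txt in dataset_dir.rglob("*.txt"):
    total += 1
    tags = {t.strip() for t in txt.read_text(encoding="utf-8").split(",")}
    hits = composition_tags & tags
    for tag in hits:
        counts[tag] += 1
    if hits:
        counts["<any composition tag>"] += 1

for tag, n in counts.most_common():
    print(f"{tag}: {n}/{total} ({100 * n / max(total, 1):.1f}%)")
```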
>>20598 Interesting. Thanks. Do I really need to go and tag all images for composition? Is it that big a deal? I've been doing POV, from behind, from side--the obvious ones--but not cowboy shot, dutch angle, upper body, etc etc. What about tag overlap? Like ass grab, or something with a lot of different applications. A character grabbing their own ass, someone else grabbing their ass (possibly from behind), that's at least 3 tags right there just off the top of my head. Would you tag all 3 if applicable, or just condense them to one and standardize it across the dataset?
>>20599 >Do I really need to go and tag all images for composition? I'm doing a full model finetune so I need to make sure that's on point; with a LoRA you don't need to worry about being that in-depth. The only time I would consider making sure something like that is tagged is if you were making a character or maybe artist LoRA and need all the angles covered. >What about tag overlap? >Would you tag all if applicable Use the booru wiki to make sure the tags your images have are actually the correct ones. https://danbooru.donmai.us/wiki_pages/help:home So in the case of ass grab https://danbooru.donmai.us/wiki_pages/ass_grab >Grabbing an ass with a hand. >See "grabbing own ass" or "grabbing another's ass" for characters grabbing their own or another person's ass, respectively. If your concept LoRA needs to be specific, I would probably not tag images with the generic token to avoid problems.
>>20600 >I would probably not tag images with the generic token to avoid problems. Thankfully I did know about the booru, but what I'm getting from this is to avoid the generic tags if a more specific one is available. I'll condense and hopefully that will yield fewer flesh monstrosities. Thanks.
(733.76 KB 1776x1216 catbox_onp7at.jpg)

(699.25 KB 1776x1216 catbox_t1od7v.jpg)

(677.81 KB 1776x1216 catbox_gjm80u.jpg)

(610.65 KB 1776x1216 catbox_pxmu8a.jpg)

were Alice and Moh Shuvuu from Shin Megami Tensei uploaded to Civitai or did I dream that? not on SD4fun either....
>>20605 >Yoga pants They are nice but they should be banned from commercial gyms, I'm extremely happy that I now train at a private gym.
>>20606 >should be banned Yes along with speedos at the beach. Also, there is so much talking in this thread. We're discussing interesting stuff but It would be nice if more images were sprinkled in between
>>20608 Slow Saturday, slow weekend, slow week, slow holiday month. Too hot to generate stuff. At least the talk is productive.
>>20608 >speedos kek and I have been too busy lately to gen as I'm now getting into a new trade.
>>20608 Working on my model and the google gibs shit Gomenasorry
>>20611 ufotable-anon can you post some Archer holding twin swords looking badass please
Getting twin swords and holding them is still an issue, it seems. This also reminded me I need to attempt working on specific swords like Kanshou and Bakuya; I've got images from artbooks and FGO assets, just haven't put them in the model yet. Got to also work on Archer in general; I tried to img2img the smug redman image and it did not translate as well as the other VN CGs I did. I'm running low on hard drive space from all these images... >Bonus Unlimited Blade Works BG, done on one of my earlier model revisions from back in March
>>20613 hehe looking good
>>20615 I like it for 2 girls but it can be a pain for 3 and more.
>>20615 these aren't bad at all, seems like its worth the effort
I guess it's because I had this in mind and had to scale back
>>20618 Not mine but you reminded me of this
>>20619 damn, the patience... impressive
>>20608 I can't post any of my gens since I'm testing my futa loras.
>>20608 same as >>20621 but with fat loras
>>20610 damn I thought you were a neet4life
>>20623 I never was "full neet" I worked when I got called in but that wasn't very enjoyable.
This time it's a style lora for Kamita It might just be the seed but it seems this lora might have inherited some inflexibility in expression. https://mega.nz/folder/dfBQDDBJ#3RLMrU3gZmO6uj167o-YZg Besides that, I am thoroughly enjoying mixing the hunyanyan style lora with other style loras, especially menma and yousaytwosign (posted respectively, with another menma mix after)
>>20622 are there any fat loras that aren't disgusting? I'm not a fatfag but this https://nhentai.net/g/117657/ i need this.
(1.18 MB 1024x1536 00010-2586716329.png)

(1.74 MB 1024x1536 00091-2253004274.png)

(1.76 MB 1024x1536 00036-544055515.png)

(1.80 MB 1024x1536 00475-883705455.png)

https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg added fangxiang cuoluan. took a break to finish all the payday heists on dsod before payday 3 drops and it's nice having some of the novelty come back
(1.26 MB 1024x1536 00056-4095114064.png)

(1.62 MB 1024x1536 00245-2942001158.png)

(1.67 MB 1024x1536 00378-2661947747.png)

(1.88 MB 1024x1536 00066-499898467.png)

>>20627 Oh god PD3 is coming out too I really hope Starfield isnt long (spoilers:it wont be)
>>20629 will be*
>>20627 neato, I'll try it later.
>>20629 I'm gonna play the story exactly once then mod it out of the game.
>>20632 i'm gonna wait three to five years for bugfixes
>>20633 Eh, your loss.
>>20634 don't think it's going to be, to be honest
>generated traditional media almost exclusively when i was a NAI paypig >never done it on local for some reason >decide to try it again with my own lora >horrors beyond my comprehension don't uh don't try to unfuck a lora with OUTD
>>20626 You're definitely a fatfag, Anon, but it's OK. You do you.
>>20626 that artist is on my lora to-do list because of this exact doujin, incredibly based tastes. but yeah, there aren't many fat loras, and they aren't that good (in quality or control), mostly because the people doing these are into actual obesity. oldest one is https://civitai.com/models/48197/obese-girls-or-concept - it's okay but very stiff in image composition (girl is focus of the image, centered, poses don't always work out well). Makes good bellies if tuned properly, but struggles on thick thighs... a newer one is https://civitai.com/models/111635?modelVersionId=52816 - it's quite nice, much less stiff than the previous one, pretty compatible with other loras, but it's hard to finetune the belly to be in a fat but not unreasonable state, and it's not consistent (especially with some shit like "from below" closeups where you need to use weights that normally produce immobile blobs of fat) newest one, which is supposed to be the previous one but better, is https://civitai.com/models/121416?modelVersionId=138856 - Haven't used it much because the first time I tested it, it affected my style quite a lot unlike the previous one. These days I mostly use BGV5 + obesegirlsconcept if I need a better belly, but making (reasonable) fat + good pictures is pretty hard to get, and the statistical variance in actual weight is horrible, the only reason I don't delete most of the gens is because it'd probably fuck up the numbering system on outputs. I'd gladly do so otherwise.
>>20638 >>20626 You guys might try the various slider LORAs. https://civitai.com/models/112552/weight-slider-lora https://civitai.com/models/4602/waist-slider-microwaist Can't speak to their efficacy as I'm not into that shit, but they're worth a shot.
>>20625 some more kamita gens
Is there a lora that makes pussies look, well, nice on spread-leg generations? puffypussy doesn't really seem to do the job
(1.76 MB 1024x1536 00002-2216579749.png)

(1.51 MB 1024x1536 00002-2585274950.png)

(1.61 MB 1024x1536 00023-824635116.png)

(1.76 MB 1024x1536 00024-824635120.png)

naked coats
>>20641 what's the style for third one?
>>20641 naked shirt and naked coat are so damn good >>20625 really liking #2. will have to try this out after I'm done generating previews for all my current loras
(1.20 MB 1024x1536 00017-3237697821.png)

(1.69 MB 1024x1536 00013-473888293.png)

(1.78 MB 1024x1536 00038-2866773371.png)

(1.41 MB 1024x1536 00041-3130485399.png)

and serafuku >>20642 https://files.catbox.moe/yxdrd7.png they're all the same style lora mix
>>20645
>ONNX model
lmao
Isn't this basically the same as tensorrt shit? Needing to make an onnx model makes it so you can't use loras
>>20646 No loras? RIP to AMD then
>>20647 I mean technically they can use loras, they just have to merge them into the model and run the conversion every time they want to change something
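For reference, merging is just baking the low-rank delta into the base weights before the conversion — roughly like this (a minimal conceptual sketch, not the actual ONNX tooling; the shapes and names here are made up, and real checkpoints key their lora tensors per layer):

import torch

def merge_lora_weight(base_w, lora_down, lora_up, alpha, scale=1.0):
    # base_w: (out, in) weight from the checkpoint
    # lora_down: (rank, in) and lora_up: (out, rank) are the low-rank factors
    rank = lora_down.shape[0]
    delta = (lora_up @ lora_down) * (alpha / rank) * scale
    return base_w + delta

# toy example with made-up shapes; in practice you do this for every targeted layer
w = torch.randn(320, 768)
down, up = torch.randn(8, 768), torch.randn(320, 8)
merged = merge_lora_weight(w, down, up, alpha=8, scale=0.7)

once every layer is merged you export that single checkpoint, which is why any lora or weight change means redoing the conversion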
I was wondering why my shit was coming out not quite fried but very contrast-heavy and then I remembered I loaded an old NAI gen with the cfg set to 11 and B64 seems to dislike that most of the time
>>20649 On another note I noticed that NAI negs and 7th layer negs produce completely different results and I think I've made a horrible mistake by sticking with the latter
>>20650 I end up having to tweak negs and CFG for each model I try.
>>20651 7th V2A is the first non-NAI model I tried and I never moved on from its recommended negs since they seem to work well enough on all models
Turns out they REALLY give the AI tunnel vision, especially when it comes to style tags like traditional media.
over 9000 hours in mspaint inpaint
>>20653 STANDING HERE
>>20652 >>20650 what are the negs?
>>20653 Very nice
>>20655 https://huggingface.co/syaimu/7th_Layer
(worst quality:1.4), (low quality:1.4), (monochrome:1.1),
>>20652 For minimal interference, I use this neg:
>lowres, terrible quality, worst quality, low quality, missing fingers, blurry, depth of field
Current neg on B64:
>(worst quality, low quality:1.4), (EasyNegative:0.8), missing fingers, blurry, depth of field, (multiple views), (monotone), 3D, (mutated, mutation, malformed:1.1), (poorly drawn:1.2), abnormal, bad face, bad body, bad anatomy, (bad eyes), bad hands, (bad), disfigured, error, lowres, terrible quality, worst quality, (((deformed))), disfigured, (extra_limb), (poorly drawn hands), fat, greyscale, monochrome, (fused parts:1.2), huge breasts, tall woman, female pubic hair, fat, wide hips, old, mature, (2girls, multiple girls:1.2),
Months ago, I spent way too much time looking at the DAAM for negative tags and seeing if they actually did what they implied. But after a while I just gave up and embraced the voodoo, and now I have a long neg. I think I need to prune it one day though, it might be reducing flexibility. It is pretty interesting though to run this and get a sense of what the negative embeddings are avoiding:
>bad_prompt, 1girl
>>20658 The minimal one is mostly similar to NAI's from what I can see
>(worst quality, low quality:1.4), (EasyNegative:0.8)
That seems like a really bad idea
>>20659 On CFG 5-8 it's fine for me. Above 8 it starts frying. EasyNegative is the best negative embedding I've tried so far.
>>20648 >remembered how slow that shit was on my RX 6600 when I used only one lora or used hires fix at all. Man it sucked.
>>20658 for me it's either
(worst quality, low quality:1.4), 3d
for coom gens where I want juicy bodies, or
worst quality, low quality, normal quality
for a non-refined and more 2d look
(5.27 MB 2560x2359 tmp086ikqgt.png)

Thanks to the ControlNet author, a flaw in the sampling process that never made its way over to the webui has actually been figured out. I'm not math-pilled enough to understand exactly what it does, but I think it's applying some sort of normalization to the first sigma that probably smooths out the sigma curve, or something. I wasn't expecting this to actually achieve better results either, but in my tests it consistently does, albeit fairly minor. Note the amount of knuckles and bikini ties in picrel. Also some minor details like the building and tree foliage. https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12818
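My loose guess at what "normalizing the first sigma" could mean (purely a guess on my end, not necessarily what the PR actually does): for eps-prediction models the fully-noised starting latent arguably should have std sqrt(1 + sigma_max^2) rather than sigma_max, so the change would look something like this:

import torch

sigmas = torch.tensor([14.6146, 7.0, 3.0, 1.0, 0.1, 0.0])  # made-up karras-ish schedule
noise = torch.randn(1, 4, 64, 64)

x_old = noise * sigmas[0]                           # plain scaling by the first sigma
x_new = noise * torch.sqrt(1.0 + sigmas[0] ** 2)    # "normalized" first sigma

print(sigmas[0].item(), torch.sqrt(1.0 + sigmas[0] ** 2).item())  # ~14.61 vs ~14.65

At SD's default sigma_max the two differ by well under 1%, which would at least line up with the differences being this subtle.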
>>20663 I recognize this Reimu.
>>20664 one of the rare times I visit /sdg/ happened to be one of the rare times I also see a nice pic there
>>20665 Lol. Thanks. And considering the lora hashes, I didn't disable it, you can see all the other hashes in the first catbox I posted. I'll try updating webui later, although I usually prefer pulling when it's pushed to master
>>20666 lol I honestly figured you were probably here. Also that's fine. I'm going to double check if the hash actually gets added because I forgot I precalculate all mine outside the webui so it is possible it doesn't hash that one for some reason, in which case it's a bug that needs to be fixed
>>20667 okay yeah it does cache the hash in the webui for me. I thought it may have had something to do with the LoRA not having metadata since it's a diff/copier LoRA and that was tripping it up but it works fine. Maybe it was a bug in 1.5.0 but I'm a bit doubtful
I've noticed that my LoRA really doesn't like the traditional media tag (little variety, fried eyes, inconsistent wings anatomy) and Starfield is right around the corner so I guess it's time to build that new rig. (SURELY I'll actually do it this time, I've only said this like 40 times in the past 5 months) I tried to fix another gen's fried eyes by switching to NAI's VAE and it NaN'd when upscaling. Pain.
(7.93 MB 3072x3072 catbox_zh78vb.png)

>>20658 Currently using >(worst quality:1.4), (low quality:1.4), (monochrome, greyscale:1.1), EasyNegativeV2, bad-hands-5 getting decent results
(3.14 MB 1536x2304 catbox_h24mny.png)

no nigger, i won't vote in your patreon poll for art of my obscure waifu
>>20665 and what's up at sdg
>>20663 >>20666 catbox for this reimu?
>>20673 https://files.catbox.moe/31hw44.png (my post doesn't have the sleepy eyes because I mistakenly assumed it wasn't used since it didn't appear in the hashes)
>>20663 that's a nice little bump in quality. the pupils also look a little better. how do you activate this?
>>20675 sorry, I'm retarded I didn't see the pull request
>>20674 catbox is shitting itself last few days
>>20678 I have noticed this too and it makes me glad I have hoarded all LoRAs. Wonder if they're under heavy traffic from anonfiles being taken down or if it's just coincidence.
>>20678 hosting a file host seems like a miserable and thankless job
>>20678 Do you organize it in some way?
>>20679 Ah, well at least there's a reason. I should donate to them sometime.
>>20680 I should have used the jdownloader scripts that were added to that torrent a long while back, because no, I otherwise don't organize these other than by character/concept/style (when I can)
(1.90 MB 1280x1600 00574-187244050.png)

(2.32 MB 1280x1600 00576-3405873573.png)

(1.85 MB 1280x1600 00578-3707122771.png)

(1.90 MB 1280x1600 00584-2454843886.png)

>>20678 We need like a way to archive and share the LORAs, maybe a torrent, because those download links aren't gonna be up forever.
>>20683 >maybe a torrent oh.. oh man... that brings back some memories
>>20683 every time this idea comes up, nobody suggests IPFS, even though 8chan like uh seven years ago (fuck) gave me the impression that long-term decentralized file storage and sharing was the goal of the project
is there something wrong with it? seems to work alright for /hgg2d/ the once a year i check their index.
>>20685 >IPFS Yeah IPFS also works, anything that's not prone to being taken down by stupidity or glowies will do tbh
>>20686 Edit: and I only bring glowies up cause there's MEGA links that are dead due to "child exploitation" reasons on the gay shit repo
>>20684 >oh.. oh man... that brings back some memories people use torrents all the time even nowadays lol. it's not like he mentioned actually ancient crap like emule
>>20688 I'm talking about the /hdg/ LoRA/embed torrent from like half a year ago. It was such a shitshow.
>>20683 Maybe someone who hoarded all the usable loras from gayshit and civit can do this, but I wonder how huge that torrent would be. 500gb at least probably?
>>20690 500Gb would be a conservative estimate, since there was a time when 128dim was the norm and a good amount were usable.
>>20690 the old torrent was like 240gb and that was from the end of this january; probably a couple terabytes now
>>20687
>child exploitation
cause the uploaders were dumb enough to actually upload nsfw example gens or datasets of characters that appeared underage; if they uploaded the safetensors files with sfw examples they'd probably pass without a problem
>>20688 torrents are fine for small collections of files but become a huge pain in the ass for archiving large quantities of them, like the old /hdg/ torrent. eventually peers will want to pick and choose which files they want (instead of being forced to take several terabytes of shit loras or unwanted characters) and archival data loss might occur when peers with complete archives stop seeding. when civitai first popped up i thought they would be the answer to a definitive lora archive, but they sold out to monetizing the platform and god knows what goes on in their back end (and searching on their website still sucks). ideally i'd like to see a curated lora archive hosted on one domain, but centralized hosting can get expensive. if people kept their shit together and uploaded reasonable examples to mainstream hosts like mega, keeping a collection of links is probably the best bet
>>20692 >couple terabytes now usable loras would be 1tb at best imo. who needs 256 dim civit garbage
(2.27 MB 1536x1536 7df5t67ygui.png)

(2.33 MB 1536x1536 gh78hui.png)

(2.23 MB 1536x1536 g8yuhijo.png)

been on a long hiatus from burnout, but feels gud to proooompt again
(2.04 MB 1280x1536 catbox_7vpnvs.png)

(1.92 MB 1280x1536 catbox_blsjal.png)

(2.06 MB 1280x1536 catbox_rtqgli.png)

(1.43 MB 1280x1536 catbox_08ygjy.png)

>>20638 After some more bad experiences with the various fat loras I've dicked around and found that using the three of them at weight 0.4 works very well. I may even merge them and use only that, since I'm using 8 loras currently lmao
tuned on ochako, didn't change anything for dawn and shylily (well, I changed the tits tags), so it's not as lora/character dependent as before
>>20146 I'm surprised this artist got a Lora done before freaking Mangamaster
(2.37 MB 1280x1536 catbox_y84lh5.png)

(2.15 MB 1280x1536 catbox_x7kwgq.png)

>>20696 However, it still destroys backgrounds in general. I'll probably have to finally try loractl to fix that, but I'm not looking forward to it... without fat loras -> with fat loras
spoiler the fatties please
(2.35 MB 1280x1600 00078-3862360626.png)

(2.75 MB 1280x1600 00038-3140288693.png)

wip hansharu lora
no
>>20695 since the new NAI text gen model dropped I got back into prooompting as well. image gen + text is about as close to the matrix as we can get for now
>>20702
>new NAI text gen model
Goddammit, please tell me it's leaked. if not, welp, time to pay pig again.
>>20703 no leak sadly. that GitHub zero day leak was probably a once in a lifetime thing.
>was gonna train model today, didn't realize I forgot to prep my HF3 data I did at the start of the month for training
fuck fuck fuck fuck fuck
>>20706 is that the embed or the lora?
Are there any news on wdxl?
>>20708 >WD >XL lmao no
>>20707 the old full nelson spitroast lora
>>20706 ol' reliable mesugaki corrector
>>20703 subjectively i'd call it about as good as the nicer llama 1 33b finetunes, just with a good UI and very fast generation speeds compared to local.
(2.30 MB 1280x1600 00276-3447200371.png)

(946.07 KB 640x960 00180-2387033929.png)

(3.08 MB 1280x1600 00161-1083935760.png)

(1.96 MB 1280x1600 00272-1912000203.png)

Hands still fucked
(1.99 MB 1280x1600 00280-2291306637.png)

Also not sure what to do with hair showing up sometimes. I tried bald with some success and hair in negatives with no success but I want to avoid anything hair related if possible since it's a mask. I'm going to see if adding another mask tag like rabbit mask or even gimp mask might help but there's not much in the way of useful mask tags
I finally pulled (to dev) and was going through my extensions
Are there any in here that I don't need? Are there any that I'm missing that I should get?
>>20695 lora and catbox?
>>20297 Bake from yesterday, running 9 epochs with 6 restarts. Didn't really make it better. I may try doing a dim 256 abomination since I've been on dim 128 all this time and it could be the reason why it can't ingest that much information about the alt outfits. Tested it quickly and it should take 8h instead of 6h, so about 30% longer. I hope this works, because if even the forbidden higher-than-128 dim doesn't, I don't know what would. I've also tested the compatibility quickly and it works for most shit, but the loose cloth hanging from her main outfit seems to disappear very easily.
Full grids:
https://files.catbox.moe/pnib75.png
https://files.catbox.moe/9he4or.png
>>20718
>Tested it quickly and it should take 8h instead of 6h, so about 30% longer.
I'm retarded, what I changed was alpha, not dim. fortunately, by bumping batch size down to 3 instead of 4 I don't OOM, but it'll take 9h. idk if I train it fully or try "upscaling" the current lora from dim 128 to dim 256 by extracting a lora from (NAI+lora) against NAI
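fwiw the (NAI+lora) against NAI route can probably be done with kohya's sd-scripts, something like this (flag names from memory, double-check them against the repo before trusting me):

# bake the existing dim 128 lora into NAI
python networks/merge_lora.py --sd_model nai.ckpt --models old_lora.safetensors --ratios 1.0 --save_to nai_plus_lora.safetensors
# extract a dim 256 lora from the difference against plain NAI
python networks/extract_lora_from_models.py --model_org nai.ckpt --model_tuned nai_plus_lora.safetensors --dim 256 --save_to upscaled_lora.safetensors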
>>20716 dynamic-prompts supersedes the wildcards extension, which is more so just meant as a reference extension implementation. Everything else seems fine. This extension is also useful but not super necessary either. https://github.com/cheald/sd-webui-loractl
>>20703 >>20702 My concern with new NAI stuff was they'd intentionally try to kneecap its ability to do lewds.
4cuck thread looks like a wasteland, what the fuck man.
Tea + Cake
>>20722 Haven't checked that place in months. Someone mentioned a catbox scraper? I'm interested, I want to see if there's anything good to learn from there
>>20722 I have not gone back to /g/ in a week or however long ago since my totally legitimate banning for telling cumrag to kill himself. But the clowns are trying to troll /h/ right now.
>>20723 https://github.com/space-nuko/sd-webui-utilities I think it's [hdg_archive.py]
>>20724
>vtai isn't touched by cumrags
>is completely devoid of shitposting and flamewars
Really makes you think.
https://news.ycombinator.com/item?id=37324683 https://a16z.com/2023/08/30/supporting-the-open-source-ai-community/ If someone actually knows somebody with the finetuning smarts to train models, this should be brought up to voldy to see if he's interested. The HN comments indicate to me they might be willing to do this for the SD community and voldy is high-profile enough he could easily get a grant if that text webui knockoff dev could.
>>20724 Thank ya!
>>20728 What a brat 💢💢💢 Requires immediate correction
>>20721 I do nothing but gen degenerate shit on NAI. the text gen is very cooom worthy. my current story is ~38k words. shit, that's more than I expected actually.
>>20729 >Requires immediate correction Yes but that's sadly for another day.
>>20721
>run by coomers and waifufags
>one of their first contributions to the field of ML was finetuning gpt-j-6b on literotica and getting a noticeable bump in coherence
>(claim to) be running an encryption scheme on their backend that lets them credibly deny knowledge of anything that's done with their services
>currently positioning to open a chatshit service with the explicit appeal of being uncensored
the only censorship i could accuse them of doing was how far they went to strip the ability to generate 3d out of their SD finetune, and seeing what happened to the printing press and that other guy who hosted SD for halfchan, they were entirely correct about what would happen if they didn't
>>20728 She gives me the finger I give her a tomboy that can teach her a thing or two
>>20732 Yeah some of its founders were former 4chan posters that only started the project because of AI Dungeon's lobotomization iirc, I think they just did that so people wouldn't be all "THEY'RE USING AI TO MAKE REALISTIC LOOKING FAKE CP" or whatever since it was still very new tech at that time.
(2.01 MB 1024x1536 catbox_ckjqqg.png)

>>20734 they've been pretty successful at staying under the radar so far. we'll see if that stays true when they release their chatshit. oh look /aicg/ got their latest gpt-4 hijack revoked and imploded again.
>>20722 >>20726 ufotable guy here, I can check this out later and see if it's even something that could be taken advantage of. I still haven't started my TPU gibs and I'll probably be learning a lot more during that time, assuming the TPU training is faster than my 4090, because I'll be able to run through multiple settings and changes and shit much faster.
>cumrag niggers squealing for peace on /h/ lol lmao
(1.37 MB 1024x1536 00101-3036427239.png)

(1.33 MB 1024x1536 01032-3778490808.png)

(1.66 MB 1024x1536 00078-1683624300.png)

(2.01 MB 1024x1536 00017-2821080713.png)

(1.61 MB 1024x1536 00096-1915043753.png)

(1.63 MB 1024x1536 00195-2188472132.png)

(1.96 MB 1024x1536 00389-2239396344.png)

(1.92 MB 1024x1536 00005-536420961.png)

>>20739 Finally, thank you. Hopefully will have a test run today. >>20738 If cumrag himself is actually willing to stop acting like a little cunt, I'd take it.
>>20742 >If cumrag himself is actually willing to stop acting like a little cunt, I'd take it. Challenge: IMPOSSIBLE Because he is still avatarfagging on /g/
>>20743 Avatarfagging is a given in that thread considering jannies are too busy dilating. If he just stopped doing the snarky remarks and fueling the shitposting that could be enough but I don't believe he's able to contain himself

(1.88 MB 1280x1600 00124-3889825473.png)

(574.55 KB 640x800 00114-854201261.png)

(744.54 KB 640x960 00088-1201873138.png)

>>20746 fkey?
>>20747 fkey 0.9, "sttabi_v2-03" 0.6
someone in the /h/ thread mentioned prompt editing loras in and out and this has opened previously unimaginable vistas of bullshit to twiddle
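if it behaves like regular prompt editing, the syntax would just be the usual [from:to:when] brackets with the lora tag inside — e.g. (lora names made up):
[<lora:styleA:0.8>::0.5] — styleA only applies for the first half of the steps
[:<lora:detailer:0.6>:0.7] — detailer only kicks in for the last 30%
haven't stress-tested how every extension interacts with it, so treat those as the shape of it rather than gospel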
Can we decide when we can apply each lora individually or is it still just a global toggle?
>>20750 what?
>>20752
>sd-webui-additional-networks
this is long deprecated, but that'd make for a nice PR if a codecel wants to just add a keyword like applyto=input/output
>>20753 >>sd-webui-additional-networks The suggestion was for the extension but it doesn't really matter, I'm sure there's a similar issue buried somewhere for a1111
did sd4fun dead?
Where do you find the img2img extra noise option in a1111?
(1.60 MB 1024x1536 catbox_u51o6f.png)

(1.49 MB 1024x1536 catbox_6rbs44.png)

(2.07 MB 1024x1536 catbox_f25c1q.png)

(1.70 MB 1024x1536 catbox_ci8kpm.png)

fkey is so good
>>20732 >>20736 >>20746 office loli so good!
(399.87 KB 1024x1536 catbox_42eyjt.jpg)

(432.52 KB 1024x1536 catbox_pl7o8v.jpg)

(446.37 KB 1024x1536 catbox_hijsqz.jpg)

(576.44 KB 1024x1536 catbox_h7wzyw.jpg)

>>20758 thanks, was also looking for this
>>20759 >>20748 my fkey gens come out very different, guess it needs the extra lora
also, middle finger by pure chance lol
>>20761 there are a few fkey loras, mine has the hash e774865004aa
(396.97 KB 1024x1536 catbox_16bnx6.jpg)

(554.77 KB 1024x1536 catbox_143r7r.jpg)

>>20762 did you get off of gitgud? just pulled v1.6.0 and gens look quite different! but this extra noise seems really nice, need to play with it more.
(13.76 KB 181x181 1683192786714170.png)

>>20732 >>20736 >>20746 Will office worker lolis be a thing in our lifetime bros? Damn off the clock slacker brats need correction!
>>20763 i think it might have been posted in these threads, actually. let me know if you want me to upload it. loras that have some graininess or texture to them also get wildly different outcomes depending on upscaler and denoising strength. i can't img2img upscale most of my pictures because they come out with a completely different vibe.
>>20765 yeah, this v1.6.0 seems more "accurate" in the gens? but then some older loras start looking like ass. also finding that extra noise starts adding a photographic look on some gens. but definitely going to spend time learning how this all works. haven't even started on all these new samplers. please upload, neither 512px (12ddce59a0ce) nor 768px (a086da16fab2) on the gitgud have that hash
why is the adetailer gook dev so incompetent? his extra noise parameter gets picked up as an override for the whole gen when loading a previous prompt
>>20766 https://files.catbox.moe/8w5yk0.safetensors whenever there's a big change like this, i find it's best to try stripping my prompt back down to the bare minimum and rebuilding with what works
>>20768 thanks anon! yeah, will have to return to the drawing board. some good hours will be wasted this weekend
(1.43 MB 1280x1600 00389-1546929879.png)

(1.38 MB 1280x1600 00391-2713614902.png)

(1.46 MB 1280x1600 00418-2572106932.png)

(1.40 MB 1280x1600 00435-2693347688.png)

>>20764 I definitely have to prompt more office lolis
>>20770 Yes I will do this as well.
(2.15 MB 1280x1600 00063-2647614796.png)

(1.97 MB 1280x1600 00066-2353960926.png)

>>20771 When you want hair tie in mouth and necktie and the AI gives you both
>>20687 >>20693 Truly it is time for LoRA@Home
>>20697 He makes some nice stuff but there's nothing desirable to me about his particular art style that motivates me to make a lora
(1.22 MB 1024x1536 00255-3525948980.png)

(1.19 MB 1024x1536 00201-2253310917.png)

(1.46 MB 1024x1536 00056-4235615577.png)

(1.70 MB 1024x1536 00005-1006304687.png)

lol the state of /hdg/ makes me not want to even bake the next thread. sucks man https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg added noodlenood
(1.54 MB 1024x1536 00020-1964780425.png)

(1.57 MB 1024x1536 00050-91958168.png)

(1.64 MB 1024x1536 00116-2994616843.png)

(1.62 MB 1024x1536 00123-91958168.png)

(1.43 MB 1280x1920 00229-1162554654.png)

(1.34 MB 1280x1920 00028-573902489.png)

(1.99 MB 1280x1920 00029-573902488.png)

(1.52 MB 1280x1920 00115-670912598.png)

gelbooru being down also sucks
(537.88 KB 910x497 1.png)

>>20757 >>20758 Anyone tried it out yet? Github Wiki example uses 0.2 extra noise on top of 0.45 denoise, the latter of which is somewhat low and can result in fuzziness at times. As a layman I also don't get how extra noise works and how it's different from denoise. My understanding is that they work differently, extra noise just adds more noise, and denoise is just that, removing noise? At face value the extra noise seems to drastically improve gens in the example.
(5.93 MB 2304x3328 finalUpscale-fix.png)

>>20778 Yeah it just adds a bit more extra noise. It's identical to NAI's implementation. See >>19955 as another example.
>>20778 >0.45 denoise, the latter of which is somewhat low and can result in fuzziness at times. Is it? That's what I usually use.
>>20780 It's odd that adding more noise results in what looks like less noise (less fuzzy, higher fidelity/quality) in the final image, but I ain't gonna question it too much. Looks great to me, can't wait to try it out.
>>20781 I guess it depends on the model. 0.45 is the threshold where the gens start occasionally giving me blocky/fuzzy stuff.
>>20782 Well it's because of what you described, it forces the denoising process to try and guess more details because the image is noisier. GAN upscalers usually smooth shit out so it helps alleviate that. And latent upscaling is very subjective because essentially all you're doing is giving the hires fix pass (which is simply img2img) a really noisy version of the image, which is why it requires such a high denoise strength. There's good reason voldy initially removed latent upscaling completely many months ago, because logically it would make results worse, but people liked it so he re-added it a bit later after some outcry. https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#upscalers This is all to say that the new extra noise parameter is like a cross between the two, as the wiki describes.
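Conceptually it's just this (not the actual webui code, only the shape of it):

import torch

def img2img_start_latent(latent, noise, start_sigma, extra_noise=0.0):
    # denoising strength decides how far up the schedule you start (start_sigma);
    # the new option just piles a bit more randomness on top of that starting point
    x = latent + noise * start_sigma
    if extra_noise > 0:
        x = x + torch.randn_like(latent) * extra_noise
    return x

so the sampler has slightly more noise to "explain away" than plain img2img, without going all the way to a latent upscale's fully-noised input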
jcm2 style lora
It seems I might have unwittingly included enough images of ponytails that it shows up when unwanted. Throwing in hair_down eliminates that just fine for what I do, but I can't speak to how much it would interfere with characters that neither have a ponytail nor wear their hair down. This also has the same issue that I'm sure every flat-color artist style lora has with checkpoints that bias towards fully shaded art; I use '((simple coloring, simple shading))' like with the hunyanyac lora, to mixed results. I've been wondering if this is due to my usual settings (dim alpha 32/16 lion) that I haven't switched from in a very long time. https://mega.nz/folder/dfBQDDBJ#3RLMrU3gZmO6uj167o-YZg
Maybe I'll do a v2 if it turns out to be broken or pisses me off enough.
>>20783
>giving me blocky/fuzzy stuff
What upscaler do you use? 0.45 seems fine for animesharp/remacri. Doing a 2x upscale with this denoise can leave the eyes somewhat muddy but you can inpaint that; other than that I don't really notice GAN artifacts with this denoise.
>>20785 Thanks! Looks interesting for flat styles. Although I feel like jcm2 has more of a western art look
>>20663 judging by the discussion in the pull request it wasn't actually an improvement after all?
>>20787 There was a ton more on the dev Discord but yes it essentially turned out to be placebo lol
>/g/ is months behind on topics already discussed by /h/ >/g/ is finally a fucking graveyard kek
>>20789 /g/ is discussing something? that's already impressive
>>20790 Not if its the usual idiots retreading old ground. But yes it is a nice change of pace for the time being.
I spoke too soon, its back to SDXL bullshit
>>20788 sadness. was very hopeful from that initial screenshot
>>20789 I saw that, /h/ was making strides while /g/ was just shitflinging lmao.
>>20785 Just noticed you're using underscores in your datasets, you do not want to do that because 1) NovelAI didn't train with them and 2) it wastes a token for each one
>a9fed7c364061ae6efb37f797b6b522cb3cf7aa2 is it safe to pull?
>>20797 wait a bit, pulled to 1.6.1 and right off the bat it gave me a "model loading" issue
>>20798 Waiting doesn't help if there isn't an open issue for the problem you're having.
>>20799 Yeah tbh I don't know for sure if it's just my setup when I pulled to the latest version, I pulled back to 1.5.2 just in case. But I'll probably pull to latest if it's a fluke on my end or something.
(1.40 MB 1024x1536 00108-3373040024.png)

(1.53 MB 1024x1536 00440-3096821961.png)

(1.74 MB 1024x1536 00017-2270344894.png)

(1.78 MB 1024x1536 00098-1846306102.png)

(1.67 MB 1024x1536 00014-1553979970.png)

(1.51 MB 1024x1536 00339-2311338711.png)

(1.60 MB 1024x1536 00027-792902246.png)

(1.87 MB 1024x1536 00029-2666812839.png)

>>20801 >>20802 Damn you really are churning these out.
(1.81 MB 1280x1600 00243-2618132478.png)

>>20785 I'll have to give this a try to see how it compares to my jcm2 lora
(2.89 MB 1280x1600 00253-2618132478.png)

(1.85 MB 1280x1600 00247-2618132478.png)

(2.25 MB 1280x1600 00251-2618132478.png)

(1.87 MB 1280x1600 00249-2618132478.png)

>>20804 Seems good, mine's a little weak comparatively without latent mode
>>20803 i finally got the combination of free time and motivation to clean through some of these datasets that have just been sitting unorganized/unfiltered on my bulk storage drive
>>20806 Coolio, more style LoRAs never hurt.
A1111 removed the apply styles button ffffffuuu
>>20808 there must be another way to transfer prompt settings, right?
>>20808 I never fucking used that lmao
(831.67 KB 640x960 00319-0.png)

These character/artist loras are fun to make
catbox is die?
original lowra:0.5 (anything higher is cursed)
LowRA:0@0,0@0.25,1@0.26,0@1
the more control I get, the more time I waste for what might be placebo improvements
>>20809 my workaround is to gen once and then copy the neg out of the generation parameters printed below the image. >>20810 I like to load my usual neg and then tweak it per gen
>>20808 Isn't it hidden under the edit style dialog now?
>>20814 fuck me, you're right. I even looked in there but glanced over the icon. thanks anon
>>20808 you can add the small change here to a .js file if you really want it back outside the modal https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12959
what's new, interesting, or concerning in loli diffusion lately? haven't been active the last 3 months, mainly because i need a new gpu
Ended up having some computer issues this week so I'm a bit delayed on training the latest version of the finetune. I still haven't even made a dent in fixing my composition tags so not sure how this is gonna turn out. That being said, I'm gonna be reaching out to the furries and get some help with the TPU set up, parameters they have been successful with, and their procedure regarding extending the gibs usage. My 30 days start as soon as I send Google my project info. So I will do one last training on the 4090, see what needs improvement, in which areas, get a quick start from the furries, and I will begin pumping out lots of builds of the model, and see about getting a version available for feedback.
>>20817 nice, thanks!
Furries got SD non-XL sharding working now
>>20758 I don't see it, is it something you have to enable in settings?
>>20823 please try reading again
>>20788 aw come on
>>20821 very good news for people with multi-gpu setups
>>20824 Didn't realize it was a slider in the backend. Wish it was on the front end to change dynamically instead of ballparking what I want with it. Oh well, new toys are fun regardless.
>>20827 There's an "options in main UI" setting under user interface. Maybe I'll update the wiki to mention that since I don't think many users know about that.
>>20828 I assume it's that dropdown part that says img2img - extra noise? And that applies it to hi-res fix as well?
>>20829 `img2img_extra_noise` is the exact field name you want to add. And yes it's for both img2img and hires fix.
>>20758 that's pretty nice good work catboxanon
>my cute 2views oshi is graduating right now
>didn't know until 2 mins ago
time to make some art of her
(2.56 MB 1280x1920 catbox_w4tuzf.png)

>>20835 IT'S NOT FAIR
>there are shondophiles lurking here
(1.48 MB 1280x1600 catbox_36fa2f.png)

>>20837 Is this a surprising revelation?
>WARNING:py.warnings:C:\\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torchsde\_brownian\brownian_interval.py:585: UserWarning: Should have ta>=t0 but got ta=0.031249985098838806 and t0=0.03125. warnings.warn(f"Should have ta>=t0 but got ta={ta} and t0={self._start}.") the fuck
(1.92 MB 1280x1600 00049-800814307.png)

>>20835 Oh what, Kana is graduating? Damn, she was cute. Are her twitch VODs all gone? I know they were there a few months ago but they don't seem to be there anymore.
>>20835 >Kana and Shondo I see you like mentally ill women
>>20837 I like watching shondo but for some reason I can't get hard to her. Same with Filian, I enjoy watching her but I can't sexualize her at all. Only vtubers I get hard at lately are Koseki Bijou and Kagura Nana but the only one I actually watch from these two is Bijou.
>>20841 She was having issues streaming on Twitch so she went back to YouTube. It's not like she was streaming infrequently either but this announcement came out of nowhere. I wouldn't have imagined it before now but she could be moving to vshojo with the wording of her graduation >>20842 Does watching Neuro also count for that? >>20843 >I like watching shondo but for some reason I can't get hard to her I get that, I feel the same for Korone
>>20844 Neuro is a robot at least and not a mentally ill woman
>>20843 Sexualizing vtubers feels weird to me in general
>>20846 idk some of them have absolute SEX designs
>>20844 >I get that, I feel the same for Korone couldn't be me
(2.13 MB 1280x1600 catbox_hs8w59.png)

>>20848 someone left a stray doggirl on your doorstep
If someone is paying NAI for textgen, do they want to check if their imagegen VAE is still fucked with color desaturation or not? I'm just morbidly curious if they fixed it. Easiest way to check is to upload something like a basic 512x768 image that's fairly detailed and saturated, put denoising and noise at 0, and compare that result with one of the SD VAEs in the webui doing the same thing. In other words, check what encoding the image & decoding the latent looks like. The SD VAE decoded result looks a tad worse but it at least retains colors, unlike the leaked NAI VAE.
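For the local half of that comparison, a quick diffusers roundtrip does the same encode/decode check (a sketch; swap in whichever VAE you actually want to test, the repo name below is just an example):

import torch
from PIL import Image
from diffusers import AutoencoderKL
from diffusers.image_processor import VaeImageProcessor

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")  # or load the VAE under test
proc = VaeImageProcessor()

img = proc.preprocess(Image.open("test.png").convert("RGB"))  # a detailed, saturated 512x768-ish image
with torch.no_grad():
    latent = vae.encode(img).latent_dist.sample()
    out = vae.decode(latent).sample
proc.postprocess(out)[0].save("roundtrip.png")  # compare colors against the original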
>>20850 >The SD VAE decoded result looks a tad worse do you mean for img2img/inpaint as well?
>>20851 I'm really just refering to img2img/inpaint, yeah. Using the NAI VAE in txt2img is okay because it's not doing any additional encoding of an existing image.
>>20852 fun thing is that stability managed to reintroduce vae autism to sdxl as well, although to a lesser extent. those globohomo scanlines pop up in almost every xl model I've tried so you have to load external vae again now.
>>20768 Oh this is the fkey locon on civitai, they have the same hash https://civitai.com/models/67128/fkey-or-stylizedlocon
>>20778 The most I could go was 0.15 extra noise on 0.5 denoise. I'm not sure if I liked the result much, I usually inpaint things like faces or details afterwards anyways
>>20849 >Based64_hll5c Awesome gen, is this a 50/50 mix of based64 and hll5c?
Clip blend doesn't seem to be working anymore, did something change?
>>20859 I'm just retarded and haven't updated it for 6 months
>>20860 nevermind, still doesn't work even after the update. it changed though
>>20862 I think I know what's happening, the quicksetting doesn't work anymore for some reason, but the script itself is still applied, so when you launch SD the previous blend ratio will be used, but changing it won't be of any use until restart
catboxanon are you there? I think that was your script
>>20862 Yes, I'm looking into it currently.
>>20863 Thanks.
>>20864 Oh, I know what it is now. The webui caches the prompt conds when it can now by default, so if you want to quickly toggle that script for the same prompt you'll have to disable it. Look for "Persistent cond cache" in settings.
>>20865 Nice, thank you. I have literally no use for this apart from the rare inpainting, since I use wildcards and adetailer which render this useless
>>20866 and now I'm running into another issue that somehow breaks my gens after doing new stuff with adetailer
well, I guess changing models for that is experimental, but still... Am I really going to have to use cumrag
>>20867 Sounds like an adetailer issue
>>20868 Actually it isn't. I just tested, after relaunching the UI and never touching adetailer
>gen on model 1
>swap to model 2
>gen on model 2
>swap back to model 1
>gen on model 1
The two model 1 pics are different and the last one is distinct from the second. Maybe this has to do with the "new" cache settings? It's extremely weird that swapping models breaks shit
>>20869 forgot to mention that the images I get are deterministic, the bugged model 1 is something I can repeat with this method, but loading it on a freshly relaunched UI makes the regular model 1 pic
>>20870 I'll try and come up with examples that don't involve large futa cocks and a bajillion loras after I come back from getting groceries
anything in the new commit that could make gens non-reproducible? my usual approach was to gen a ~20 batch of lowres, get the seeds I like and then hires fix that gen but now every time I do a 1 batch while reusing seed I get some changes with the gen. At least when using SDE Karras https://files.catbox.moe/22d9uy.png https://files.catbox.moe/dq3iuh.png I used to get identical gens with this approach before
>>20872
>Do not make DPM++ SDE deterministic across different batch sizes.
setting is disabled and I never touched it anyway
happens with euler a as well but didn't seem to happen with 2m karras.
xformers: 0.0.20
>>20873 Does hll5c have the pointy ears bias like hll4 had?
>>20875 Not from what I've tried but I don't recall having that issue with hll4 though most of what I was prompting at the time was Gura and Ina
>>20869 >>20870 >>20871 Here, using a simple prompt, no loras, and set seed 1
1- Gen on model 1 (based65 here): https://files.catbox.moe/nywwtj.png
2- Swap to model 2 (based64 here) and gen with same parameters: https://files.catbox.moe/ltafp9.png (but that's not very interesting)
3- Swap back to model 1 and gen again: https://files.catbox.moe/n2aaw3.png
This is pretty concerning. No extensions are running and the only thing of note that I can think of is that I updated xformers a few days ago (using a pip command that built the latest version, nifty stuff), so I'm on xformers 0.0.22+748c159.d20230831
>>20877 I'm going to try downgrading xformers but it's a massive hassle because it was built automatically so I don't actually have a wheel to reinstall it later if needed
>>20878 Okay, it's not some newer xformers quirk (unfortunately). Here are the same images with xformers 0.0.20
https://files.catbox.moe/qla6w4.png
https://files.catbox.moe/78yz84.png
https://files.catbox.moe/b94vee.png
This means it's a problem with the (new?) way checkpoints in ram are handled. Also, considering the base images I noticed this on (anime vs furry model, where the style was furry but there were anatomy problems), I suspect that some parts of the model are not transferred properly or at all during this switch. Any codecel help would be appreciated since I'm a brainlet
>>20879 >way checkpoints in ram are handled I had a doubt so I checked if without ram caching it still happened. It doesn't.
(402.88 KB 1024x1536 catbox_oae049.jpg)

(319.21 KB 1024x1536 catbox_700q4l.jpg)

>>20855 ty. the locon makes a big difference for style it seems
just wondering if anyone already did tanabe kyou
>try out cumrag to try a specific workflow
>get custom nodes
>immediately pozzes my venv with retarded packages that have multiple == dependencies, uninstalling shit and then throwing errors
I don't know what I expected
>>20883 Well of course you're not going to have a good time if you're going in biased. If you can't figure out simple node interface, Comfy is not for you.
>>20884 >If you can't figure out simple node interface this was mentioned nowhere in that post go back, shill
>>20885 i doubt they even understand what they're shilling
(377.46 KB 1024x1536 catbox_ye1vdm.jpg)

(309.88 KB 1024x1536 catbox_bd9pe3.jpg)

(413.46 KB 1024x1536 catbox_z7jknw.jpg)

(347.23 KB 1024x1536 catbox_4fs3k9.jpg)

>>20830 Doesn't seem to add anything bad, but not anything worth it? I just get more graininess so far as I can tell. 0 -> .15 -> .3, I like the .15 so I'll probably leave it there, it just adds a little more flavor I guess. Not sure if it gets a scaling effect based on your original denoise, I tend to go between .5 - .54 on denoising on the upscale.
>>20883 lmao what the fuck is that do cumrags really
>>20889 >textbox clipping
not sure what to make of the new samplers yet
prompt: 1girl, loli, flat chest, topless, nipples, swim trunks, paw pose, smug smirk, one eye closed, cat ears, open mouth, red short hair, green eyes, sunset, sunrays, subsurface scattering <lora:Fkeylo(li)conv:1>
>>20796 Underscores and spaces were used so often interchangeably back when I learned how to make loras (and colabs all used taggers that added underscores) that I assumed they were handled basically the same way. At the very least, I don't think I've ever run into any grave issues
>>20786 Oh yeah there's a rule on this website about "western styled" art, isn't there? With all due respect I don't really care
>>20884 Fuck off cumrag
I made some small talk with the furries while waiting for some of my model shit to finish getting prepped before I could train. Got the new script that they are using to train on TPUs, but they said I could try testing it myself locally to get used to it before getting set up on Google, and that it's device agnostic and can run on GPUs or TPUs. https://github.com/lodestone-rock/SDXL-sharding/tree/main
Lodestone says his repo is unorganized, so it's gonna be disorganized chaos for the time being, but regular 1.5 and XL stuff is labeled respectively. Lodestone is gonna try to see if he can get an extension to include gibs for a TPU v4 pod (a huge TPU v3 cluster) so they can do more advanced voodoo fuckery. There is also another guy that is planning on just doing a new 1.5 training that isn't furry exclusive and is just gathering the dataset, but he also got the Google TPU invite.
And then for just any final bits of info, the 5 TPU v3s that you are provided by Google if you jump onboard yourself are equivalent to 16GB VRAM cards, but with sharding you can use all 5 to have 80GB VRAM for training, so around 3.3~4x performance over a 24GB card. But since they just got 1.5 sharding working I don't think they have done that many runs yet.
>>20890 cleaning up the actual schizo workflow that guy made is horrendous, muxers and pipes everywhere that make nothing simpler
>>20895 I think that at this rate, trying to figure out the exact jump they did from 1.5 to XL and then making your own XL with anime right off the rip will be a lot faster and more efficient than struggling with XL's cucked architecture trying to build an anime finetune. Furries are testing a 1088x1088 version of the model but not sure what the base or parameters are.
Is there any way to get the download buttons back in A1111? I use a colab and suddenly my download buttons were replaced with this
I would really like the Download as zip button back so I can save batches I like and never see the rest again
>>20898 hover over them
>>20899 On a colab, it saves the zip to a dedicated folder (in the colab storage). I was hoping there was a way to get the old button that downloads the zip (to my PC), because once Google says time's up the colab is completely wiped and all those zips are gone.
>>20900 Oh nevermind, it seems that the tooltip is wrong and/or this was fixed. It says it'll save to inside the colab but it gives me the option to just download to my PC instead so I was being stupid. My bad.
any artist suggestions for style loras?
>>20902 Some 2hu ones in no particular order that mite b cool: kamukamu, piyokichi, kasuya baian, formicid, huxiao
If I want to gen two character loras in one picture, should I use regional prompting or just gen two generic girls and inpaint the loras into the generics? Which method is better?
>>20902 Tanabe kyou?
>>20877 Funnily enough I can't reproduce this, but because step 3 throws this:
>RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
This is with only built-in extensions enabled too
>>20906 Tried with all ext disabled (including built-ins) and I could reproduce it, that's even weirder
>>20905 Not mine but there's already a Tanabe Kyou lora. Could be re-done better or maybe has been already but I haven't been looking https://litter.catbox.moe/16knlu.safetensors
>>20884
>venv
>VENV
>hurr durr node interface
The state of comfy shills when they don't even know what the fuck they're talking about
>>20904 I prefer Regional prompter in latent mode+open pose. Inpainting lora characters in after high res fix is not doable or very time consuming unless the general shape/resemblance is already there. Inpainting lora characters then highres fixing is easier but still time consuming and will fuck your inpaints, you'll have to do extra inpaint fixes after. Reg.Prompt latent+openpose just gives you the whole thing without requiring manual intervention.
>>20910 How exactly do you make regional prompter with loras? I learned how to use it with vanilla prompts but if I add a lora to the mix then it shits the bed
>>20911 Like this >>20469 and make sure you're in latent mode
>>20912 What's the difference between latent and attention? I finally managed to get something decent on attention, but if I switch to latent then I'm back to abstract art abominations
>>20913 Put simply, attention basically just changes the regions that are affected by the prompt. Latent is slower but can also include loras. It's probably where you have the loras in your prompt which is causing issues. If you can't get it to work post an example catbox
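The usual latent-mode layout is something like this (loras and tags made up, just to show where things go), with BREAK splitting the regions and each lora living only inside its own region's chunk:

2girls, park, side by side BREAK
<lora:chara_a:0.8> aqua hair, maid BREAK
<lora:chara_b:0.8> red hair, school uniform

(first chunk as the base/common part, assuming your divide ratios are set up to match)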
to get poses and gens as clean as this do I have to learn how to use controlnet or something?
>>20915 Nah, that's probably standing, arms behind back and head out of frame.
>>20849 KOROSEXO >>20895 Oh hey, thought you left like B64 anon
>>20919 nyehehehe hey lois remember that one time i turned into a dragon child
>>20921 better, thanks
>>20902 Nakajima Yuka Sky freedom Youkan Atahuta
>>20902 (fujimura) carlo, om (nk2007), tatata, haga yui
>>20890 found out there is no "undo" I mistakenly clicked arrange float left and it pozzed all the cable management I had please send help
>>20890 what are you trying this for?
>>20928 exclusively to make futa using multiple models, since the cock detection model I trained works well but I can't do what I want using adetailer
the node shit is good if the thing you want is slightly out of range of a1111, but holy fuck is the UI bad
you can clearly see what cumrag did and where he just import solution'd his way out of coding, it looks like beginner intern tier code
>>20929 Why not try the new Invoke's nodes? Why doesn't anyone use Invoke? Just the lack of autocomplete and regional prompting?
>>20930 Can't be bothered learning yet another UI also lmao @ picrel
>>20931 >Can't be bothered learning yet another UI Oh fucking please, it's the same exact shit when it comes to the nodes and it's VERY similar to nu-NAI on the foolproof workflow front. Stop acting like switching between SD UIs is the same as going from PS to g*mp, it's pathetic.
(2.03 MB 960x1440 catbox_ydoglo.png)

(1.82 MB 960x1440 catbox_rnhp4f.png)

(1.46 MB 960x1440 catbox_0iewwu.png)

(1.41 MB 960x1440 catbox_337eqc.png)

>>20917 I've been pretty burnt out and SDXL being a letdown killed my drive. Thinking of getting back in the game and baking some LoRAs (especially ones I had unfinished datasets for), at least for a little while.
>invoke shilling after cumrag shilling
>>20934 >lol c*mrag is so bad >have you tried the only node-based alternative that isn't just a renamed cumrag? why doesn't anyone use it? >lol shill Nigger.
>>20933 Plenty of drive left here, just not literally. I'm struggling to come up with more excuses to postpone building the new rig now that Starfield dropped.
invoke deez nuts
this webui orthodoxy really isn't helping anything. schizo is now acting up if someone dares to mention invokeai, you okay buddy? cumrag hate is understandable because he's a faggot, but that still shouldn't prevent (You) from using it if it's good for your purposes
stop posting UI shit that isn't A1111
click and drag on slider is so cancerous
"why the fuck does nothing work"
then notice picrel
>>20940
>slider
meant ticker
>>20940 >cumrag nigger hates cumrag but keeps using it because he's too nigger-brained to switch to an UI with an identical workflow but that actually works >m-muh learning curve At this point you're just stealth-shilling cumrag and acting like a dumb nigger whenever someone points it out.
invoke my cock in your mouth
I hate schizophrenics so fucking much
>>20943 Buy an ad.
How much did Voldy pay for an ad?
what is even the point of this weird shill attempt when we are a small regular group? Can you go to /g/ or something and fight there?
There is no shill attempt. It's just you and your mental illness that starts triggering once a different UI is mentioned at all. Cumrags brigading cuckchan /h/ was frustrating because they were obvious shills. You and the other retard start blowing up once someone mentions invokeai or dares trying cumrag out for his specific purpose. fucking insane.
>>20947 It's just a schizo we picked up along the way he shills cumrag to get attention when cumrag is dissed with proof, he can't just go "that actually doesn't happen" so he starts shilling something else
>>20948 This is true You people flip the fuck out whenever something that triggers your /g/ PTSD but if someone posts normally then no replies at all
>>20949 >so he starts shilling something else Sorry, I'm not the shill and I'm not part of his schizo tulpa either. I'm just genuinely curious about why the cumrag shill doesn't try out Invoke when the workflow is the same and when it doesn't seem as hacked together as cumrag is. I also asked why nobody uses Invoke even though it seems as polished as nu-NAI with the benefit of a node system for the truly autistic proompters. If asking a simple "hey why doesn't anyone use this? what's wrong with it?" equals shilling to you then you need to detox from /g/. >muh learning another UI isn't a valid excuse when it's literally the same workflow and when you're SUPPOSEDLY autistic enough to even use nodes to begin with. I hate niggers and I hate nigger-brained people who make nigger-brained excuses just as much.
>>20951 based Total Nigger Death Total Cumrag Death
haha based my fellow national socialists kill niggers bottom text
>>20951 >the cumrag shill doesn't try out Invoke I am not the cumrag shill, otherwise I wouldn't be bitching about it with literally zero praise other than "the node shit is good if the thing you want is slightly out of range of a1111", which isn't specific to cumrag. And the cumrag schizo did seethe at me. >I'm just genuinely curious I answered you and you instantly seethed back with >>20932 which is not "genuinely curious" behavior if you're not a redditor. I just didn't want to spend any more time dodging retarded design concepts, especially because I won't be using that UI for anything other than this specific workflow.
>>20954 >I just didn't want to spend any more time dodging retarded design concepts, especially because I won't be using that UI for anything other than this specific workflow. >n-no, you! leave me alone i just want to complain Negroids shouldn't be allowed to use a computer and you're the proof. >I've set up SD, found the 8chan link and even managed to learn c*mrag but OH, WOE ME, I just can't be bothered to do the same exact shit I'm doing right now in another software that looks exactly the same. You dumb fucking nigger, I bet you're the kind of shit-for-brains who loves to complain about g*mp all day but still refuses to install PS. If you love the sound of your voice so much you should try taking your head out of your ass, maybe you'll be able to hear yourself better once you flush all the shit out of your ears and brain.
Just ignore the comfy shill
>>20955 You have severe anger issues. >>20956 The only involvement of the comfy shill is >>20884 otherwise it's just local schizo infighting
>>20957 >You have severe anger issues. Wrong, my anger issues have anger issues. And I love it.
>>20957 Shoebill faggot is absolutely deranged, that has been pretty obvious for a long time.
>>20959 I've never interacted with him before so this is news to me. For a moment I thought my inner schizo awakened.
>>20959
>I've never interacted with him before
t. /g/ newfag
We get it.
>>20959 Enchanté.
>>20961 interacted != seen
I've been here for like 5 months if I trust the timestamp on my loras
>>20959 how many archives did you skim to try and fake being a regular?
delete all low-quality posts. hold zero regard for personality or post history, kill without restraint.
>>20964 I mean in that context it's useful to know it's a homegrown schizo and not an imported one
(2.16 MB 960x1440 catbox_6e98ge.png)

(1.90 MB 960x1440 catbox_xlgafn.png)

(1.70 MB 960x1440 catbox_oked53.png)

(1.96 MB 960x1440 catbox_2koeha.png)

shill this, schizo that; how about you fucks post some loli pussy like the board was intended for
>>20966 This anon has the right idea
I don't want to hear how some idiot in /g/ mindbroke you so hard that you got PTSD now
Post some lolipussy instead
(1.92 MB 1280x1600 catbox_gz0lik.png)

Yes more loli please
Is it possible to use regional prompting or tiled diffusion to draw weird things like a loli with big breasts?
>>20970 why would you need regional prompter for that?
>oppai loli, large breasts
and then tinker with
>mature female, curvy, plump, huge breasts, wide hips, thick thighs
as needed in your negative.
>>20970 while the other anon's answer is optimal, you can totally do this, and you can also just do it with inpainting (which is what it's for): if you want to give a normal loli breasts, just slap them into img2img and inpaint with the autocaption plus big breasts added
(2.13 MB 960x1320 catbox_zxef2u.png)

(2.39 MB 960x1320 catbox_babh28.png)

(2.03 MB 960x1320 catbox_vgbs2e.png)

(1.97 MB 960x1320 catbox_v7zb8o.png)

>>20969 a challenger approaches
(1.79 MB 960x1320 catbox_6uogyi.png)

(1.76 MB 960x1320 catbox_u8rh9d.png)

(1.95 MB 960x1320 catbox_pdo6ws.png)

(1.58 MB 960x1320 catbox_l5ztsr.png)

>>20975 jazzed up, and get one free when you buy 2
>>20971 nice ones. when I prompt 'covering breasts' I get nothing >>20975 peak loli fox, very nice
>>20969 >loli fox revival time I concur
>>20976 loincloth ties it all together >>20978 cute smug fox
>>20980 >with regional prompter share your secret
(1.41 MB 960x1320 catbox_9rsodc.png)

(1.55 MB 960x1320 catbox_ed7rtg.png)

(1.91 MB 960x1320 catbox_tkauph.png)

(1.62 MB 960x1320 catbox_lrhrca.png)

extra chunky
(1.56 MB 1536x1024 catbox_jlb8u7.png)

>>20981 no secret, I just proompt
Finally took the time to organize a to-do lora list. Not sure the best way to share it or even if anyone would care to see the whole thing but I plan on keeping it updated anyway
>>20985 I see a few loras i'm interested in, can I bother you for a link?
>>20986 I mean it's a to-do list so I still have to make them but I can try and get to those first. This is probably the best way to share it for now https://litter.catbox.moe/1sr0dd.xhtml
>>20987 Oh I thought it was like a list of stuff you made
The Egami, Gemba and donguri looked interesting since I like those artists
>>20987 Quick update to fix readability and formatting https://litter.catbox.moe/sklg5w.xhtml >>20988 I'll put those next on the list then. The actual list of stuff I've made is here https://rentry.org/zp7g6
Has anyone done any tattoo related loras? Like Kokona's stamp, any brand-related stuff or tally marks?
>>20990 it makes more sense to inpaint these
>>20902 Inuboshi, Teri Terio, Nora Shinji, St.Germain
Looks like they took down the Chinese site that was an exact copy of Civitai minus the nofunallowed part
What a shame, I took too long to download the Poppy from Pokemon model before Civitai took it down
(991.25 KB 2304x1664 image(3).jpg)

(702.10 KB 2304x1664 image.jpg)

(730.08 KB 2304x1664 image(2).jpg)

(869.26 KB 2304x1664 image(1).jpg)

>https://mega.nz/folder/ZvA21I7L#ZZzU42rdAyWFOWQ_O94JaQ/folder/0io1WQTa
Baked booba detection model to inpaint covered nipples. Works on all chest sizes, weaker on flats due to dataset bias. Here you go, anons. Place it in \models\adetailer to use. Covered nipples lora is in a separate folder.
>>20993 That's unfortunate. Saw many people looking for deleted civitai models recently on 4ch.
>>20994 neat. could you point me in the right direction on how you went about making the model?
>>20996 lol not nearly enough inpaint padding moment
Finally training my finetune with some of my planned improvements. Found out that if I wanted to train at 768x resolution I would have to lower my batch size or I would CUDA out of memory. On 512x I was able to push the batch size a little further without cuda memory errors, but I noticed I hit diminishing returns and training would take longer after hitting batch 20, and I'm not even sure if moving it that little would do much for the extra training hours. I will experiment with these details when I get the TPUs rolling. The only reason I am still doing a 4090 training is solely to have an up-to-date control variable of a model to compare against.
I was told by Lodestone to do a finetune test with the Jax script to get used to the process before moving forward, but I'm just gonna do a small chunk rather than the 200k images I'm currently using, so I'm expecting that by the weekend I'll get activated, get told how to shard all 5 v3 TPUs, and start training these models much faster and with other parameters. I did see that vpred can be trained on 1.5 out of the box so I can do a vpred version of the model; they were also testing some of the samplers to see about skipping the need to use the CFG rescale, assuming I read their shit correctly. I think of all the parameters that were being tested that were introduced in 2.0, the issue is with NoPe. I saw a comment that suggested NoPe was lobotomized, so not sure if training with that will continue. They are also having issues finetuning XL in general, but not sure if they are running some other sort of parameters that WDXL didn't use or if it's some sort of limitation of the TPUs. If I feel like caring I will dig a bit more on the XL end.
My model will finish training Tuesday afternoon burger time and I will post some results then.
>>20995 https://civitai.com/articles/1224/training-a-custom-adetailer-model This article should cover the important steps, it's pretty well written. Feel free to ask if there's something you're confused by, I want to save people some time from doing stupid shit like I did during the process.
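For the lazy, the gist of that article (as I understand it, the adetailer detection models are just ultralytics YOLOv8 detectors) boils down to annotating a dataset and running something like the sketch below. Paths, model size and epoch count are placeholders, not the article's settings:
# minimal sketch: train a custom detection model for adetailer
from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # small pretrained detector as the starting point
model.train(
    data="dataset.yaml",            # your annotated train/val split in YOLO format
    imgsz=640,
    epochs=100,
    batch=16,
)
# the resulting runs/detect/train/weights/best.pt is what goes into \models\adetailer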
Should these be the same for locons?
>>21000 typically i set conv layers to half of network dim and both alphas at 1
>>21000 Also, I recommend 16 network dim / 8 conv dim for locons; anything higher is typically extreme overkill unless you know what you're doing. If your results are underwhelming, adjust the learning rate by a factor of 0.5x or 2x (or a full order of magnitude if it seems to barely work). 0 repeats, and balance epochs and batch size to get between 1000-2000 steps while keeping batch size <=4. These settings produce consistently good results for me.
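Roughly what those settings look like as kohya sd-scripts flags, assuming the LyCORIS module for the conv layers; the model path, folder names and learning rates are placeholders (the LRs are just common defaults, not gospel), and "0 repeats" in practice means a 1_-prefixed image folder so each image is seen once per epoch:
accelerate launch train_network.py \
  --pretrained_model_name_or_path=model.safetensors \
  --train_data_dir=train \
  --network_module=lycoris.kohya \
  --network_args "algo=locon" "conv_dim=8" "conv_alpha=1" \
  --network_dim=16 --network_alpha=1 \
  --learning_rate=1e-4 --unet_lr=1e-4 --text_encoder_lr=5e-5 \
  --train_batch_size=4 --max_train_epochs=20 \
  --resolution=512,512 --output_name=my_locon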
(729.00 KB 640x960 00142-0.png)

(757.30 KB 640x960 00120-0.png)

(2.01 MB 960x1440 catbox_z3mgti.png)

baking something but the first pass came out undercooked so it might be a while...
(1.88 MB 1280x1600 catbox_2g2325.png)

(1.77 MB 1280x1600 catbox_0cq6mn.png)

(1.79 MB 1280x1600 catbox_v8nr10.png)

>>21008 catbox #1 please?
>>21008 do you have a link to that one?
(1.69 MB 960x1200 catbox_dvr45y.png)

(1.66 MB 1152x1320 catbox_7fkd1q.png)

(2.32 MB 1152x1320 catbox_fc27xm.png)

(1.91 MB 960x1440 catbox_25evmh.png)

is there a magic trick to the multidiffusion controlnet tile upscaling shit that i'm missing
left: multidiffusion tile upscale, 0.5 denoise, 112 tile width, 160 tile height, 56 tile overlap, 4x foolhardy remacri, noise inversion enabled (5 steps, 20 retouch, 1.5 strength, 210 kernel size), tiled vae enabled (4096 encoder tile size, 512 decoder tile size, color fix enabled), controlnet tile enabled (1.0 strength, start 0, end 1, downsampling 1), every setting fiddled with to little result, eight and a half minutes
right: default sd upscale script, 0.4 denoise, 1x3 tile, two and a half minutes
i'm going to bed feel free to ignore this
(1.33 MB 1152x1320 catbox_ehyirp.png)

(1.55 MB 1152x1320 catbox_wv8orl.png)

(1.53 MB 960x1320 catbox_twuf2t.png)

(1009.22 KB 768x1152 catbox_uxu4us.png)

>>21014 I've been going back and playing with the "failed" versions of this locon, some of the early tests captured the artist's faces better but were a bit messy. Less conventional styles with a lot of scrawl are a pain in the ass to train, it doesn't help that I can't take many liberties with the dataset.
Something's fucked with regional prompter for me now; if I do 3 regions or more, everything after the 2nd one gets fucked. This problem even appears when I regen old regional prompter gens.
>>21002 >0 repeats Are you sure?
>>21020 Yes, increasing the epoch count is effectively the same as using repeats except:
1. You're mathematically less likely to show the AI the same training image twice in a row
2. There are cleanup operations that are performed every epoch
(I think scheduler cycles are also affected by epoch count but not 100% sure)
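To put numbers on it (hypothetical dataset, just to show the step math is identical either way): with 100 images in a 1_ folder (1 repeat), batch size 4 and 40 epochs you get 100*1/4 = 25 steps per epoch, so 25*40 = 1000 steps total. The exact same 1000 steps could come from 10 repeats and 4 epochs; the only difference is how often the dataset gets reshuffled and how the per-epoch bookkeeping (saves, scheduler cycles) lines up.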
>>21021 that's usually called "1 repeat"
>>21022 i'm going to strangle you
(1.54 MB 1024x1536 catbox_h8xvik.png)

(1.70 MB 1024x1536 catbox_5a6kkp.png)

>!!!!!!!!! PLEASE DON'T POST LEWD IMAGES IN GALLERY, THIS IS A LORA FOR KIDS ILLUSTRATION !!!!!!!!! lol
(1.52 MB 1024x1536 00235-1654498039.png)

(1.91 MB 1024x1536 00054-305289223.png)

(1.75 MB 1024x1536 00101-801104086.png)

(1.63 MB 1024x1536 00015-3079996877.png)

(1.68 MB 1024x1536 00003-853181264.png)

(1.64 MB 1024x1536 00128-3536201268.png)

(1.82 MB 1024x1536 00087-2921267301.png)

(1.94 MB 1024x1536 00253-2419640530.png)

(1.44 MB 3648x3590 catbox_8eo8qi.jpg)

>>21025 Nice, very close to my locon but yours seems to do armor a bit better, did you crop out some of his more complicated fantasy illustrations?
>>21028 nah, didn't do much, just added new images to the dataset. in fact, that particular bake had some watermarks and duplicate images that slipped past me, but when i fixed the dataset and retrained, all the results came out worse than the 1020it one, so i settled with it. blackbox amirite
>>21028 For me his style was really temperamental to train, I think the wide range of face/body types actually serves to confuse the AI especially if they aren't adequately tagged. blackbox do be real though, I'm training another style right now and it feels like total RNG whether the result is fried or undercooked
>>21010 thanks!
>>21015 I prefer the one on the right, but I think you went too high on the denoise on the left and it invented too many distracting details. some parts of the left are nicer. I usually do 0.2 denoise, with euler a sampler, 20 steps. the readme claims this:
>Recommend Parameters for Efficient Upscaling.
>Sampler = Euler a, steps = 20, denoise = 0.35, method = Mixture of Diffusers, Latent tile height & width = 128, overlap = 16, tile batch size = 8 (reduce tile batch size if see CUDA out of memory).
I usually don't change from the default tile overlap though. Don't do noise inversion unless you really want to preserve specific details. Also, I don't think you need tiled vae at that resolution. I don't use it, not sure if that slows it down (I suspect it does).
20 more minutes and this shit will finally be baked
>>21032 anon turned into bread
ayyyyyyy
I'm liking the results of this training, it's a shame I couldn't train at 768x768. I'll do a training on that resolution with the TPUs.
(665.32 KB 640x960 00039-0.png)

(799.93 KB 640x800 00052-2331067412.png)

Damn very long tongue doesn't work. But I'll take it because cute
how do i best go about testing how well a lora has been baked? this is my first time testing a concept lora (trying to do side-view missionary) and i can't pinpoint any outstanding flaws in a particular epoch. i even rebaked with a lower learning rate just to be sure. can i get some feedback?
>>20906 ran into a similar issue. I swapped models around a bit with a ram cache of 2, deactivated the ram cache, switched models around a bit again, and got
>NotImplementedError: Cannot copy out of meta tensor; no data!
The full error ( https://pastebin.com/9qHGsKXP ) mentions the clip blend script, though that's probably just because it's part of the gen process
>>21038 Yeah, I got this too when messing with the RAM cache. This one makes more sense, and voldy had mentioned the implementation is inherently flaky with how he's doing it, but afaik it only ever occurs when adjusting the RAM cache and not restarting. Unfortunately voldy also fucked off after the update came out lol, so I dunno when I can bring it up to him.
>cumrag uses fp32 vae by default
>switches to tiled vae if OOM
>need to find the cli flag to force fp16 VAE by yourself because there's no wiki
That explains why the color balance and contrast are fucked up. Sorry, but I use a real VAE and not a pozzed one. I've never seen a black image on auto and never will, but my VRAM and time are precious.
(1.39 MB 960x1256 catbox_ddl4dy.png)

(1.27 MB 960x1256 catbox_uhukdh.png)

(1.54 MB 960x1256 catbox_ugg0xf.png)

(1.45 MB 960x1256 catbox_ptqxdz.png)

>>21014 Okay, I added a second version of this LoCon which is trained with higher rates and more data, though it might be a bit harder to wrangle than the first one. Added it to the MEGA.
>>21037 Half the gens in your X/Y plot have deformed arms or body, is it model or lora induced?
(1.13 MB 1024x1536 catbox_1r80w7.png)

(1.30 MB 1024x1536 catbox_eplzgv.png)

>add_detail lora
>controlnet tile
>extra noise
I have no fucking idea what I'm doing anymore
>0b free out of 232gb
>arknights x monster hunter collab is tomorrow
>seriously out of shit to delete this time
>all the other drives have 900mb of free space available
>need at least 10gb to update because android faggotry + vm faggotry on top
Starfield and stable diffusion didn't push me to build the new rig but Arknights will.
>>21045 how long have you been sitting on all those PC parts?
>>21046 with everything ready to go? exactly 6 months... My newest excuse is that I don't trust my stock of GC-Extreme because it's been sitting there for like three and a half years (even though it's all sealed, it's very hot and humid here), so I ordered a 10g syringe and that's coming in tomorrow. I'm not gonna waste the few sealed 1g bags I have, but it was good enough as an excuse. :) Had this been a client's PC I'd have built and tested it within 2 days max, but I just can't find the motivation to do shit whenever I have to do anything for myself
>>21047 I'm going to hold you accountable to build your PC tomorrow and remind you until you finish kek >>21048 very nice, been wanting to do something similar to make a small concept VM
>>21049 thanks. my enthusiasm starts high and then erodes whenever I try one of these inpaint heavy gens. I have developed some gimp muscle memory, not sure if that's a good thing.
>>21049
>I'm going to hold you accountable to build your PC tomorrow and remind you until you finish kek
I'll probably be too tired tomorrow. To be brutally honest, the real reason why I haven't built it yet is that I don't have the space to build it, and the case I bought is too fucking heavy to lift up onto the tiny kitchen table we have. I should buy one of those adjustable desks and build the pc on it; I need to replace the tiny garden table I keep my synth rack and laptop on anyway
>>21050 just use PS or affinity, seriously gimp makes me want to deepthroat a shotgun
>>21051 build it on your floor butt naked, so long as it isn't carpet kek
>>21053 I would but I have a bad back and a dog lol
>>21054 fuck sakes bro stop making excuses kek
(1.71 MB 1280x1600 catbox_xmxqr9.png)

>>21055 nah not excuses, why do you think i can't lift the case up lol
>>21057 I'm just busting your balls man, but seriously, get your shit set up asap, I'm an anon of my word
>>21058 Yeah I'll try to get it done by the end of the week for real this time I probably shouldn't have bought a 7000D lol
>>21040 wew that third one
(907.43 KB 640x960 catbox_9j8q42.png)

We out here reviving dead characters
>>21062 that's not mima
is there a decent rape miko lora that doesn't mangle her accessories and ears whenever you prompt side/back views or whenever you use a style lora?
>>21064 sorry i meant yae miko
(1.12 MB 1024x1536 00238-3111231716.png)

(1.19 MB 1024x1536 00214-1968415746.png)

(1.44 MB 1024x1536 00031-859897696.png)

(1.58 MB 1024x1536 00016-1762578840.png)

(1.61 MB 1024x1536 00408-1858865408.png)

(1.63 MB 1024x1536 00032-90336449.png)

(1.54 MB 1024x1536 00102-2012325409.png)

(1.50 MB 1024x1536 00046-2478040621.png)

>>21066 HOLY BASED
Anyone successfully made a style LORA with DAdapt? It just seems worse than other methods. A couple of the guides I've seen -- https://rentry.org/59xed3 & https://rentry.org/LazyDAdaptationGuide -- say it's the best, but I'm just not seeing it. For much smaller datasets, like for characters, maybe it's better, and that's what they're basing it on? Or am I doing something wrong? I've tried many different settings, including the ones various guides suggest, and they always come out fried to the absolute max. TOILETS. Any suggestions, anons?
(1.53 MB 1024x1536 catbox_cxgayb.png)

(1.89 MB 1024x1536 catbox_diway2.png)

>>21070 One of my style loras (custom udon in gayshit) was trained with DAdaptAdam(decouple=True,weight_decay=0.6), lr=1.0, unet_lr=1.0, tenc_lr=0.5. Back then there was a rumor that you have to have dim == alpha for dadapt optimizers, but I'm not sure if it was true or if it still applies to DAdapt V3
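For anyone wanting to copy that, it should map onto train_network.py flags roughly like this (double-check the optimizer_args spelling against your kohya version; the dim/alpha values here are placeholders, only included to illustrate the dim == alpha rumor):
--optimizer_type=DAdaptAdam \
--optimizer_args "decouple=True" "weight_decay=0.6" \
--learning_rate=1.0 --unet_lr=1.0 --text_encoder_lr=0.5 \
--network_dim=32 --network_alpha=32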
>>21071 Haven't tried tenc lr at 0.5, or matching the dim and alpha. I'll give that a try for the next version.
>>21070
>I've tried many different settings, including the ones various guides suggest, and they always come out fried to the absolute max. TOILETS
every dadapt fail bake was underfit for me, interesting. I think it is good for style datasets of 50-100 images, but if you have more, adam is better imo
>>21073 What are the settings you use for non-dadapt?
Where were you when the no-no-absolutely-not model was dropped on /sdg/ and /hdg/? I was eating spaghetti while binging Initial D for the 5th time when the thread updated
"glowniggerkikes have released a jewkrainian cp-trained model"
"no"
https://arstechnica.com/information-technology/2023/09/ai-generated-child-sex-imagery-has-every-us-attorney-general-calling-for-action/
Currently, my only gripe with my model is that the shit only works if I merge it with existing models right off the rip. Otherwise the training adjustments I made did bear fruit, but I would like the model to be able to stand on its own as well and still give the choice of merging with other shit to experiment. I think I did all the testing I wanted to do for now; I need to start talking with the furries more seriously and see if any of them are kind enough to walk me through running the script, and get some more info on what would get me the best results to improve the base model when used standalone.
>Both images are same prompt and everything, working one is the merge, schizo bloodborne vision is base
>>21075 >ars technica didn't this outlet have one of their journalists arrested for CP and caught attempting to coerce a child to come home with him in a sting operation?
(80.30 KB 788x962 #Antis - Real Pedo Caught.jpg)

>>21077 I don't know, I don't follow e-tabloids. Probably? Lemme look it u-- OH IT'S THIS GUY LMFAO
>>21078 Would you look at that lmao
>>21077 >>21079 Apparently they let him go shortly before his downfall and some staff guy said something very interesting
>>21080 >please don't bother his associates lmao In regards to the original article, their comments section is basically laughing at the author. And pearl clutching hot takes are getting ratioed.
>>21081 Yeah, it's great. Anyone with half a brain cell can tell that this isn't about protecting kids whatsoever and it's just more glowniggerkike entrapment shit.
>>21074 sorry, phoneposting now, but I pretty much copy adam settings from lazylora anon >>21067 and sometimes adjust the dim a bit (also trying lower dims because 144mb loras are taking a toll on my ssd already). I haven't baked for pretty much a month though
>>21078 >drpizza >arrested for cp
>>21081 Old tech communities tend to be quite aware of four horsemen shenanigans.
>>21075 These tech-illiterate boomers don't know the genie is out of the bottle, good luck getting it back in there, best you can do is keep faggots from approaching the genie lamp. All they can do is stick it in the Cave of Wonders now.
(1.86 MB 1024x1536 catbox_gmk1j3.png)

(1.73 MB 1024x1536 catbox_llrhbg.png)

Testing new jp checkpoints
>>21070 I don't know if kohya's fixed it but the issue with d-adapt used to be that it only read the LR from the text encoder setting. I had success when I was setting unet to 1 and text to 0.5 while other people who set base LR to 1 were getting fried shit (since unet and txt will read from base if not set). Presumably 1.0 LR is actually overkill and the sweet spot is somewhere around 0.5-0.8 for the same hyperparameters you would use with adamw8bit.
>>21089 worth?
(1.93 MB 1280x1600 00150-1522303122.png)

(2.05 MB 1280x1600 00146-2644063009.png)

>>21090 catbox please
(2.08 MB 1280x1600 catbox_wivv4x.png)

(2.44 MB 1280x1600 catbox_hkvdyn.png)

(1.83 MB 1280x1600 catbox_nr7jed.png)

Do a flip
>>21096 i really love this style, it's great the AI picks up on it so well
(1.35 MB 1280x768 catbox_jaf0k8.png)

(2.14 MB 1280x1600 catbox_94o02r.png)

>>21097 I've always liked the style of backgrounds using solid brush strokes and this style does that really well with a more boxy look. That and the hair highlights
Does the guy that makes the based mixes come around here? If so I'd like to request a version of Based66-V3 without HLL4. Alternatively, a way to subtract it myself
>>21099 He hasn't been here in a long minute
>>21099 If you go a thread or two back I think someone posted the recipe for Based64. Might be a good starting point.
>>21094 https://litter.catbox.moe/3qc1yf.png normal catbox isn't working for me right now
>>21101 Not them, but BasedMix Anon started experimenting with SuperMerger for 65 and 66, so it went in a completely different direction from the simple recipe. Which I think was AOM2_Hard + (HLL3.1 - NAI Full) @AD1, then that result + Defmix Red @ws0.5.
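So if I'm reading that right (could be wrong on the exact models and weights), in checkpoint-merger terms that would be:
step 1, Add Difference @ 1.0: temp = AOM2_Hard + (HLL3.1 - NAI animefull) * 1.0
step 2, Weighted Sum @ 0.5: based66ish = temp * (1 - 0.5) + Defmix Red * 0.5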
>>21103 based65 wasn't even that bad, don't know why people shat on it. but b66 I kinda disliked
(7.99 MB 4480x1626 file.png)

>>21092 Nah, not really. Most new merges from 5ch are way too stylized, to the point that they are practically character finetunes
>>21044 What did you do there? I like the first one, but it's missing metadata. I guess you used this lora for it, but with negative values? https://civitai.com/models/58390/detail-tweaker-lora-lora I'm kinda looking for ways to make images look less detailed and more colorful. I like how 7th_anime_v3_C looks, but it makes pussies look like garbage.
>>21106 It's the other way around: the left one is 7th anime v3 with kasshokuhada lora, the right one is img2img with based64 and controlnet tile
(1.54 MB 1024x1536 catbox_lx9ccj.png)

(1.54 MB 1024x1536 catbox_ssgfq0.png)

>>21106
>I like how 7th_anime_v3_C looks, but it makes pussies look like garbage
I've yet to find a flat color model which consistently produces better pussy than 7th anime, in the meantime I just inpaint with based64 and pretend it's good enough
(1.66 MB 1920x1079 HD-Player_P30pgMtiCQ.png)

I am patiently waiting for a decent LoRA.
>>21100 Guess I'll try opening a discussion on the huggingface page, if that's even actually his
>>21104 I really like based66 except for HLL4 and all the problems that it brings, which I've dealt with before in my own mixes. I really don't get why he included a model that hard changes how certain things look (almost all horns look like laplus') without putting certain vtubers in negatives.
>>20926 which Kanna Lora?
>>21110 He does have a huggingface page, but it's a matter of whether he is still active or if the negative reaction to the recent models made him stop working
>>21112 Well I'm forever gonna be thankful to the guy for making Based64.
>>21113 catbox?
(1.75 MB 1024x1536 catbox_mdlaks.png)

>>21099 basedanon (the name I call him by) got bullied off here by some troll (you can check the archived threads here); the guy even tried to shitpost on his based66 civitai page, prompting him to close the comments, so yeah he hasn't been heard from or seen in about a month now.
>>21116 People take breaks and get tired of this, it's normal even if he didn't get bullied for his newer mixes. >>21110 Are the horns your only problem? When I tried comparing models, b66 seemed to consistently fail more with prompts I used, especially with chocos, it really fucking hates them lol. Negging vtubers seems like a band aid to a deficient model tbh, but what negs do you suggest?
Pretty sure that HLL4 was causing lots of issues due to the way its training was changed from what was used in 3.1. Pretty sure HLL5 had even more changes from that; /vtai/ seems to like 5 a lot over 4, though some people still crown 3.1 as the king.
>>20766
>yeah, this v1.6.0 seems more "accurate" in the gens?
Nothing changed for me? My 1.5 stuff comes out the same in 1.6.
(2.15 MB 1280x1600 00100-2801111385.png)

(2.30 MB 1280x1600 00106-2801111385.png)

(2.26 MB 1280x1600 00114-2801111385.png)

>>21120 how much takorin is in that?
(1.85 MB 1280x1600 00086-1000679991.png)

(1.94 MB 1280x1600 00126-1026995780.png)

(680.31 KB 640x800 00343-3290906729.png)

>>21121 It's mostly takorin plus whatever is leftover from the character lora. Just playing around with styles that have sovl
>>21117 Yeah it's a bandaid and I hate it, hence why I want HLL4 out. I don't spend a lot of time comparing checkpoints, and mainly moved to based66 because it handles my style loras way better than others I've used. I had trouble with flat styles in 64, 65, AOM, and others like animelike, which generated garbage with my loras. The horns aren't the only problem, but they're one of the most egregious for what I generate (I'm a massive hornfag). Another huge problem for me specifically is that when I generate with a lora I made of my waifu, the network quite often ends up attempting to give her a sort of half ponytail unless I add "hair down" to the prompt. This hairstyle literally does not exist in the dataset that I used to make the lora, and I have only ever encountered it on mixes that include a version of HLL. Another fun thing is typing in oni and getting half of Ayame added instantly. It's just quite a focused model and doesn't belong in a general mix. I don't do a lot of random genning to find more general problems with models, I almost exclusively generate specific characters using self-made loras because I'm autistic about character accuracy. Still, I've found enough problems with HLL to avoid using it unless I actually want to make use of what it has.
>>21116 That's a shame
Been away for a while. Did the anon that trained on the ufotable artstyle post his model? If so, can I get a link
>>21116 >basedanon (the name I call him by) got bullied off here by some troll Hahahahahahahaha How The Fuck Is Cyber Bullying Real Hahahaha Nigga Just Walk Away From The Screen Like Nigga Close Your Eyes Haha
>>21124 he's posting here and no he didn't release the model >>21076
I wish someone made a civitai alternative because the owners keep brainstorming ways to make the site worse
>>21124 I'm still here, I'm waiting for the furries to help me transition over to working in Jax to get the TPUs going. I shared some scripts and images that I already posted here, along with some of the issues I'm currently having, and am just waiting for them to free up a little. I'm cleaning up more images in the meantime and waiting to send google my project info to get activated.
>>21128 Wish you luck in your endeavors, and I look forward to the result.
(2.14 MB 1600x1280 00240-952445263.png)

Correction will have to wait until tomorrow
>>21123 sounds like hll4 is just overfit. I haven't even tested hll5 so wonder how it does compared to 3.1
Is there any dorontabi lora out in the wild? Didn't find any in gayshit, civitai or 4chan archive
If anyone here is on lower spec cards (20XX or older ideally) and wants to test out https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13162 please feel free. It's supposed to be a free performance improvement but the effect seems negligible on faster cards.
>>20902 If you're still taking suggestions, varniskarnis would be nice. Or dismassd as well, I find that the ones on gayshit leave a bit to be desired.
>>20902 Amedama Akihito/Old School Academy
>>21021 Am I supposed to need 25+ epochs to get one that isn't undertrained?
>>21133 I'm on a 1650 and I could try it I guess but I have no idea how to install that or test it out
Did google ban shit again? I'm using the nocrypt collab and it keeps shutting down randomly like i'm triggering their no-no word filter again
(2.66 MB 1280x1600 00311-4285356848.png)

>>21132 Not that I know of so I threw one together quickly https://mega.nz/folder/2FlygZLC#ZsBLBv4Py3zLWHOejvF2EA/folder/6NUmVBwY
(2.80 MB 1280x1600 00349-3453317486.png)

So how do you prompt loli? Is there a lora for it?
>>21141 write loli then press generate?
>>21138 I don't use Colab or any of that, but checking the 2hu AI server it's not only you. I guess cumfyui also got picked up in their filter. nocrypt said they would look into it tomorrow.
>>21143
>Cumfy and Nocrypt got picked up this time
Probably cumfy's fault since he's been doing some aggressive advertising and shilling
How does one set up a collab for others? How does it work?
>>21145 The big-time colabs normally make a script that runs SD on google servers and then spits out a gradio/cloudflare link. See
https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb
https://colab.research.google.com/drive/1wEa-tS10h4LlDykd87TF5zzpXIIQoCmq
If you mean setting up a colab so a friend can gen using YOUR pc resources, then just set up gradio/cloudflare/whatever and give your trusted friend that link so they can use your spare PC resources when you're not using them.
>>21146 >my resources hell no lol I was just curious how people use Google's resources to make Collabs that others use.
>>21144 nocrypt is saying it's server-side so everyone might be fucked unless they figure out how they're detecting it, lol
I'm more surprised ComfyUI got hit, honestly. With how hated cumfy is and how hard his UI is to use, I'm surprised his shit got hit and banned from colab too.
good thing I prompted all the shit I wanted already. can't get to my pc for a few months and I'm probably not gonna torture my laptop with this. owari da
>>21150 ComfyUI apparently still works and will be up again after some tweaks/workarounds but you'll have to come to terms with getting cummed inside by Cumfy
>>21151 I'd cope with using comfy if I really wanted to, but I was feeling kinda burned out anyway. And looks like google is actually trying to get rid of free colab users this time instead of using half-assed filters, but we'll see
>>21153 I'm pretty sure it's still a filter, like 99% sure. The problem is that colab makers like Nocrypt are just running away to comfyUI rather than trying to figure out how to dodge the filter. Also, if you do dodge the filter, maybe keep it hidden for a select few instead of flaunting it to the world in the main A111 repo???
(722.38 KB 640x800 00120-494715533.png)

Holy fuck why is a black hoodie with purple sleeves the hardest thing to prompt
>>21154
>Also, if you do dodge the filter, maybe keep it hidden for a select few instead of flaunting it to the world in the main A111 repo???
Well maybe there are some who dodge the filter and keep it to the select few lol
>>21156 Yeah, just saying since it really bothers me. The golden rule of violating TOS is to keep it under wraps instead of flaunting it. It's like those FFXIV guys that bought a billboard and showcased themselves violating TOS.
>>21154
>The problem is that colab makers like Nocrypt are just running away to comfyUI rather than trying to figure out how to dodge the filter
everyone's saying that they're going after cumrag too, but I just tested and it seems to work without getting kicked out of colab. but colab welfare is probably coming to an end unless you are capable of developing a notebook for yourself or your limited clique
>>21155 separating colors like this without inpainting is a huge gacha
Freed up some hard drive space so I can begin adding KnK 5, 6, and redone versions of 9 and 9.5 to the model dataset. This will be the side quest this month: getting the raw images with autotags plugged in (manual tagging later), while getting the TPUs ready will be the priority. Doing my damnedest to get something passable ready to go.
>>21154 >I'm pretty sure it's still a filter, like 99% sure. nocrypt says that it's server-sided code and I'm pretty sure he knows this shit rather well
>>21160 Looks very clean! You mentioned that the bare finetunes don't come out good enough themselves for now though, what are you mixing it with?
>>21160 What are you doing again?
>>21163 1024 dim madoka magica lora
>>21161 there's still a way, but the nocrypt guys are tweeting directly to google https://twitter.com/missneggles/status/1700872089170006462 so it probably won't be long until google shuts down the final workaround and all poorfags like me are forced to use Cumfy's awful shit, since Google might be specifically targeting Nocrypt and all the other popular colabs shilled on A111's repo. Like I said, the sole way to keep colab alive is to keep it to the select few rather than making it public for clout and getting it shut down. Nocrypt posted a picture of some dude running like 30 instances of the free colab and that's exactly why google is killing it for the average free user.
>>21166 we'll see how it goes I guess but yeah unless you are a member of some clique, a1111 colabs are done for. maybe google is gonna preventively shut down cumrags too while they're at it so no point in getting into it, for now at least and also I don't feel like nocrypt was particularly well-known. most people even on 4cucks seemed to think that colabs were dead since april
>>21168 All it takes is one (1) autistic fuck. Like, they posted this screenshot of some retard running 30 instances simultaneously, so the best way to keep colab alive is to just make a small, obscure notebook and share it with a few friends, because if it gets popular or some malicious actor decides to do some funny business (Comfy wanting to kill A111 for example) then it's over for poor people like me that just want to casually gen now and then.
>>21167 I need to stop using Anime6B, it just doesn't work right when LowRA is in the mix
>>21171 i feel that
>>20980 catbox?
>>21162 I followed the Based64 recipe but swapped HLL for my model. >>21163 Ufotable checkpoint finetune, take the studio art style and make it more flexible than a LoRA. Some concepts still need work due to bad data or lack of it from either my stuff or base NAI but eventually they will get fixed.
(2.17 MB 960x1440 catbox_imjfxg.png)

(2.04 MB 960x1440 catbox_3fy7sq.png)

>>21171 at least you're not me, who got his motivation to work on AI stuff back immediately before becoming ensnared in the latest toddslop release
>>21175 Is starfield actually good? I've only heard bad things about it
>>21176 It's godawful in many departments and full of mindboggling design choices, I'm constantly getting whiplash because one quest or game system will be well thought out and then something else will drag down the experience immensely. I will say the first 10-20 hours are mostly fine and if you're curious it's probably worth checking out through gamepass.
>>21177 >It's godawful in many departments and full of mindboggling design choices I can't play it yet, anything major that isn't the already tired and old loading screens/ship takeoff thing or the fact that planets are still divided into chunks and not really "open"?
>>21175 Yeah I'm never playing another Toddslopp game after Skyrim, no amount of mods can polish a turd like that. The game I'm currently replaying is Sekiro and after that I'll revisit a classic from the PS1.
How would I go about making an inpainting lora? Do I just train it like normal, with cropped images, and just call it in an inpainting pass? I'm fixing to make an autistically comprehensive lora for several different kinds of horns
>>21178 It's not Elite Dangerous or NMS, and the space game elements that are there generally feel stapled on (except shipbuilding and combat which is surprisingly good). If you temper your expectations and accept it's first and foremost a (bethesda) RPG you'll at least get some dumb fun out of it.
>>21176 It's good if you like Fallout 4 and want that experience in a sci-fi/outer space environment with spaceships and whatnot, less scrap collecting and more ore mining and alien killing for resources. It's just that everything that's not a major area is procedural generation, so keep that in mind; if you wanna walk by randomly genned rock #938976345243 to mine ore for resources and kill some bad guys while you're at it, I guess go for it. The main story's whatever, you're the dragonborn but in space. I liked F4 mainly cause of the crafting system and the shooting, but this game will not be everyone's cup of tea for sure.
>>21182 p.s. the performance is dogshit, especially on Nvidia; you will probably not run it at 60 fps at 1080p unless you have, I dunno, say a 4070 or above.
>Joke about new Madoka trailer and how anon should make a fake key art poster to troll /a/
>new Madoka movie trailer dropped this morning
Fucking kek
I think I need to start brainstorming the shaft finetune after Ufotable so I can be ready for the BluRay release
>>21181
>It's not Elite Dangerous
I've played little ED but from what I know and have heard about it that's probably a good thing
>or NMS
That I do know unfortunately, for a space game the planets don't look anywhere near as exotic as even the regular NMS biomes
>If you temper your expectations and accept it's first and foremost a (bethesda) RPG
Pretty much what I wanted honestly, I enjoy bethesda RPGs at their core but most of the time it's the setting that feels limiting (TES) or too repetitive (Fallout, I can only take so much rust and rubble. I don't mind it in 2D but in 3D it's weirdly fatiguing if you know what I mean)
I'm assuming you meant ship combat is good, right? What about regular combat aside from the spongy enemies on the higher difficulties?
>>21182
Fine by me honestly, I dislike FO4 as a Fallout game quite a lot but if you asked me whether I'd rather play that or Outer Worlds I'd immediately pick FO4. By the sound of things I'm probably gonna wish they had ripped off NMS way more than they seem to have but oh well.
You guys know NMS got new content recently right? Just play that instead of current year field
>>21186 Yeah. Unfortunately HG still won't give us ship and multitool customization but maybe the new two-handed staff is a testbed for modular weapons.
>>21185 >I'm assuming you meant ship combat is good, right? What about regular combat aside from the spongy enemies on the higher difficulties? Yeah, I like the ship combat. Regular combat is not great but not awful, it's 95% vs humans and wildlife is janky, enemies are spongy and have no object permanence but there's already mods to fix both of those if that's your thing.
>>21189 >but there's already mods to fix both of those if that's your thing. Sure, I don't mind it. I'm the kind of modlet who wants to play modded shit but never does outside of "essential" fixes just to avoid all the headaches, especially with bethesda games.
(1.69 MB 1280x1600 catbox_repsil.png)

>>21184 I'm stealing this idea for NGNL so that it's guaranteed we'll see a real season 2 trailer tomorrow
>>21184 >>21191 ..... I'm making a Kemono Friends finetune.
>>21192 All you're going to get from Kadokawa is a new Kemono friends gacha with NFTs and Blockchain
>>21193 >is a new Kemono friends gacha with NFTs and Blockchain I regret to inform you and to remind myself that Kemono Friends 3 is pretty much that.
>>21180 >inpainting >>>>>LoRA Wat
>Kizumonogatari Koyomi Vamp recut also announced >There was a new Monogatari volume released earlier in the year Shaft gravy train coming through!
Wasn't the post limit 1200? What happened?
>>21197 Good point, wonder what happened
>>21198 My guess is that site owner must have raised the global limit
Is anyone with an AMD GPU just shit outta luck?
>>21200 You have to join Linux gang
(2.44 MB 1280x1600 00010-1376157019.png)

(2.44 MB 1280x1600 00020-2735052858.png)

Bratty archon needs inpaint correction
I just found out that nips are selling their AI arts as kindle books on Amazon JP https://amzn.asia/d/1kOsUJf Also, JP reviews are fucking built different holy shit.
(1.63 MB 1280x1600 00032-520105926.png)

(1.47 MB 1024x1536 00013-3078006502.png)

(1.59 MB 1024x1536 00023-1893917267.png)

(1.57 MB 1024x1536 00108-2727936970.png)

(1.62 MB 1024x1536 00008-4121359591.png)

https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg added lk149 >>21203 >*All the people in this photo book are illustrations generated by artificial intelligence (AI) and do not exist. >*All of the people in this photo album are at least 20 years old. kek
(718.16 KB 1024x1536 00006-790305222.png)

(1.16 MB 1024x1536 00008-1837806673.png)

(1.51 MB 1024x1536 00006-1202154496.png)

(1.45 MB 1024x1536 00296-1616998854.png)

(2.00 MB 1280x1600 00033-2576122252.png)

>>21168 It's truly over.
>>21168 Cumrag was already complain-gloating that CumUI collabs were getting shut down too but that it means Google is aware of his existence. Truly an insufferable fag.
>>21208 Guess it's a good thing that I generated stuff I wanted till the point of exhaustion while I had access to my PC and later colabs. feels like the doors are closing for cloud SD >>21209 Yeah I tried feeling sympathy for his mental disability (severe autism) but fuck he's so annoying and miserable.
>>21208 why? did they announce anything? are the filters unavoidable now?
>toy switched to cumrag lame Anyway, this looks interesting https://twitter.com/toyxyz3/status/1700799684388978721
>>21174 >holy shit it's a good looking motorcycle! >sidestand is down lol
>>21174 >I followed the Based64 recipe but swapped HLL for my model. and how does it look like without mixing?
>>21205 catbox for witch please?
>>21215 Not him but try the examples in the mega
>>21216 thanks, was skimming to catch up and didn't see the mega link!
>>21213 kek, I did notice it after the fact and inpainted it out.
>>21214 See >>21076 From what I understand it's an issue with how the finetune data and the base model's weights are clashing.
>>21212 Well if Animation fag is making his tools for cumrag first, its not a surprise that he is using the UI for that purpose.
>>21220 The recent developments are impressive but I just don't care about them
anyone have links or experience with training ESRGAN models for upscaling? think i've got the main idea down but was wondering if someone's already gone through the trouble of developing best practices for stuff like dataset preparation
>>21222 Like an upscaler? I haven't heard of any anon trying to train one.
>>21210 >>21208 Colab is still alive, as in you can still use stable diffusion, but Nocrypt definitely got nailed along with all the other popular colabs. The only way for colab to survive going forward is to go underground and not share working colabs you find, because it seems that what gets colabs shut down is concurrent users: if google sees a lot of users on a notebook then it will most likely be shut down. People in the nocrypt discord tried it with some obscure colab from a spanish discord and everyone swarmed to it because "Bro it works!!!!" and now it doesn't, because it's not obscure anymore. So if you find or make a workaround, keep it to yourself or only share it with a select trusted few.
>>21224 How does one set up a collab? I may try getting one up and running to demo my finetune.
>>21225 Using FREE google resources: study the lastben one, figure out how to bypass the filter/block and don't share it with many people. If word gets out then google might shut it down and the associated account could be banned.
Using PAID google resources: clone one of the existing colabs and share the link, or have it download the model as the default. Keep in mind google pro only buys you like 50~ hours of genning.
Using Paperspace: ~$8 a month for unlimited gens and NSFW is allowed. Actually a sweet deal.
Using your own resources: start up your UI of choice and share the gradio/cloudflare link (rough example below).
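For that last option, the stock A1111 launcher already handles the tunneling part; something like this in webui-user.bat should be all it takes (flag names from memory, double-check against the wiki; the credentials are obviously placeholders):
set COMMANDLINE_ARGS=--xformers --share --gradio-auth friend:longpassword
call webui.bat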
>>21226 Didn't paperspace go on a nuking spree?
>>21224
>colab is still alive
nevermind, seems that the working colab I was using is dead now. Guess people figured out it was working and now it's dead like the rest, so it's fucking over for me too.
>>21227
People in nocrypt's discord have been shilling paperspace as the best deal right now and apparently they allow NSFW so it should be fine, but they require mobile verification + your payment methods and I'm extremely reluctant to attach my usual gens to my real name and mobile phone, so I can't test out paperspace myself and say if it works.
>>21222 >>21223 >https://upscale.wiki/wiki/Beginner_Training_Guide nvm i'm retarded i forgot the upscale wiki is a thing
Is there any way of encrypting the picture being generated if I want to use Colab/Paperspace/Whatever and am a paranoid fuck?
>>21230 >cloud solutions >privacy pick one
(1.93 MB 1024x1536 00002-1670035893.png)

(1.99 MB 1024x1536 00045-2459105074.png)

(2.35 MB 1024x1536 00008-841766573.png)

(2.26 MB 1024x1536 00086-3635164042.png)

(1.84 MB 1024x1536 00021-1105949257.png)

(1.87 MB 1024x1536 00022-3823994282.png)

(2.10 MB 1024x1536 00093-2698925423.png)

(2.12 MB 1024x1536 00113-2937136964.png)

>>21232 thank you anon
Been gone for a few months, any new methods of baking LORAs or interesting updates with models? Or is everything pretty much the same?
>Guy purchases paperspace pro and sends it to nocrypt so they can experiment
>Gets almost banned from the server because he was anti-trans and decides to just leave
This is definitely why we can't have nice things
>>21236
>try to do good by the "community"
>have the "wrong opinion"
let them rot then
>>21235 Things have kind of stagnated. The big things right now are people trying to get SDXL working due to sunk cost, and animation. The furry community is training 3 new base models using TPU grants from Google, with one of the models having its own finetuned inpaint model included. They also figured out how to shard (train across multiple TPUs/GPUs) hardware that normally can't SLI/NVLink. They are also attempting an XL finetune but they are getting very angry at it lol.
>>21238 ah, was kind of hoping the anime AI art side would have had a huge breakthrough. saw the webui got updated with a bunch of new samplers, are any of the new ones being used here much?
WIP, there is a Suika LoRA already on civit but it's 128dim and fried, plus I wanted to add the twin buns from Lotus Eaters. No matter what I try the AI doesn't want to learn her chain baubles, though they would probably just be spaghetti even if it did.
I hate how sometimes based64 makes really great pics on its own and other times it makes everything too grainy and washed out with realistic textures even with the same parameters and prompt except a different location or position. And adding shit to the negative like film grain or scan to counter that doesn't seem to do anything. Seems like it's mostly a case for indoor scenes, while outdoor in nature scenes are more colorful and smoother.
>Colab is dead
>Need to abuse my laptop 1650 if I want to gen ever again
It's all over for me bros
Go on without me...
Saw that a bunch of new HLL models came out; wonder if they avoid the issues HLL4 had. If Based anon ever comes back, I hope they use HLL3 again to avoid issues.
>>21243 Fuck you made me look. HLL6 isn't out yet so I'd wait and see, good to see HLL Anon working again. I see that some of the other /vtai/ folk are talking about fluffyrock and vpred.
>>21241 are you using (worst quality, low quality:1.4)? try lowering the weight of that to 1.2 or lower
>>21224 I can't find nocrypt's discord server, is it a secret club or I'm being a brainlet?
>>21244
>fluffyrock and vpred
I saw the earlier model posted here needed a .yaml; would a random merge of a regular model and a vpred model need a yaml made for it?
>>21218 catbox?
>>21245 Yeah, but I think I tried doing that, or just doing "worst quality, low quality, normal quality" without extra wights and it still looked bad. Not sure though, maybe I'll try again later, can't right now.
>>21247 You need to use a particular setting in SuperMerger to merge a vpred model with a non-vpred and you need to keep that yaml file on hand so it can make a corresponding one with the merge. I recall that the process to merge the model is like 40 minutes to an hour?
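For anyone wondering what's actually in that .yaml: as far as I can tell it's just the standard LDM inference config with the prediction type switched, i.e. the only line that really matters is something like
model:
  params:
    parameterization: "v"
so the merged model also needs a copy with that flag next to it, or the webui will sample it as a normal eps-prediction model.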
>>21250 alright I'll keep that in mind, I suppose so far it's only been the furries that have uploaded vpred models online?
Out of the loop here, what is vpred and why is it significant in a model?
Spoke with Lodestone (FluffyRock maker) for a good hour. There are still some issues with sharding 1.5. The TPUs have less VRAM than my 4090 on their own, so it's not worth activating my invite and trying to finetune on less capable hardware in a different coding environment (Jax) when I can just continue what I'm doing. He did offer me some suggestions so I will try them out on a later attempt after I chuck in the KnK stuff I just frame extracted today.
>TLDR, Google gibs can't help me in the way I hoped, gonna buy kneepads for some A100s
>>21252 It provides a better range of color and better prompt adherence; it requires a CFG rescale extension to work, however.
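Since a couple of anons asked what the CFG rescale thing actually does: as far as I understand it (going off the "Common Diffusion Noise Schedules and Sample Steps Are Flawed" paper that the rescale extension seems to implement), it just renormalizes the guided prediction so vpred models don't blow out their contrast. A rough sketch of the math only; the real extension hooks the sampler and works per-channel:
import torch

def rescaled_cfg(cond, uncond, scale=7.0, phi=0.7):
    # standard classifier-free guidance
    cfg = uncond + scale * (cond - uncond)
    # rescale the guided prediction back toward the std of the conditional one
    rescaled = cfg * (cond.std() / cfg.std())
    # phi blends between the rescaled and plain CFG results (phi=0 is normal CFG)
    return phi * rescaled + (1.0 - phi) * cfg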
>>21236 >>21237 I hate trannies so much it's unreal
(1.59 MB 1024x1536 00110-1551265082.png)

(1.55 MB 1024x1536 00010-1828639636.png)

(1.53 MB 1024x1536 00003-2371746602.png)

(1.77 MB 1024x1536 00041-722210710.png)

(1.53 MB 1024x1536 00311-3702424475.png)

(1.69 MB 1024x1536 00018-412899713.png)

(1.79 MB 1024x1536 00122-1911207473.png)

(1.93 MB 1024x1536 00104-2755580064.png)

>>21255 >>21256 Ooooh this is quite nice.
(2.51 MB 1280x1600 00121-773379432.png)

Haven't done any Jahy in a while
>>21258 i can see why
>>21219 looking forward to the results, would really like to see a usable nai-derived model without aom2's cancers
>>21260 I'm pretty confident that the way AOM2/3 was merged is the reason for all its undesirable traits, such as that glow bloom. One thing I never understood though was why WM777 would do some weird shit like:
>Add Difference @ 0.3
>AbyssOrangeMix_base + (NovelAI animefull - NovelAI sfw)
nai-full and nai-sfw are two different trained models, so AD'ing them would do nothing good unless I'm misunderstanding the use of Add Difference with values that aren't 1. Someone who is better at model merging and has time to read this could probably figure out a better way to make a less bloomy, less 3D version of AOM. https://huggingface.co/WarriorMama777/OrangeMixs#abyssorangemix2-aom2
Also there is this, I wonder if the guy that posted it months back is still here. https://www.figma.com/file/1JYEljsTwm6qRwR665yI7w/Merging-lab%E3%80%8CHosioka%E3%80%8D?type=design&node-id=1-69&mode=design
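For what it's worth, Add Difference at values other than 1 isn't doing anything exotic, it's just scaling the delta before adding it. This is roughly what the merger computes (filenames are placeholders, and the actual NAI leak files are .ckpt rather than safetensors, so adjust accordingly):
from safetensors.torch import load_file, save_file

alpha = 0.3  # the @0.3 in the AOM2 recipe

a = load_file("AbyssOrangeMix_base.safetensors")  # model A
b = load_file("nai_animefull.safetensors")        # model B
c = load_file("nai_animesfw.safetensors")         # model C

merged = {}
for key, wa in a.items():
    if key in b and key in c:
        # add difference: A + (B - C) * alpha, i.e. 30% of the full-vs-sfw delta
        merged[key] = (wa.float() + (b[key].float() - c[key].float()) * alpha).to(wa.dtype)
    else:
        merged[key] = wa

save_file(merged, "aom2_base_ad03.safetensors")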
>>21261 The agreed-upon method for making HLL merges seems to be mixing a main model and then adding HLL with add difference. You can also do the Based64 method of adding another flavor model on top of that result. So in theory, the best way to make a merge candidate for dreambooths or enthusiast finetunes like HLL or Fate is to pick a base like AnyV3, pick a good realism model, play around with gape or an alternative for better genitalia, and do whatever voodoo mixing is required to make a new AOM. Then mix it with HLL/Fate for a new BasedMix.
What was AnythingV3? Was it a mix or a finetune?
>>21263 CREAM THE MAID
>>21264 It was a merge, but I'm not sure with what; at least chinkman's Civitai page lists it as a merged checkpoint. But it lines up with the old posts about it coming from a telegram channel with a bunch of people schizo merging shit together.
>1.0 A111 ran slow but it ran on my PC with --medvram
>1.6 doesn't run at all and even gave me a MEMORY_ALLOCATION bsod
>ComfyUI just works and all I had to do was run the .bat that starts the UI
idk bros I think cumfy won this one
Going to do some troubleshooting with A111 and see if I can get it running in low VRAM mode
>>21266 Hope you didn't update your Nvidia drivers this morning
>>21267 I'm on 537.13 for Nvidia drivers, is it an nvidia issue?
>>21268 Roll back to 531.18, because an update after that fucked up memory usage in Stable Diffusion. The only reason it got reported in the patch notes is because it also affects AI tools in DaVinci Resolve.
>>21269 How do I roll it back though? Never done that before
>>21270 Uninstall the driver like any normal program. If you want to be sure use DDU https://www.guru3d.com/files-details/display-driver-uninstaller-download.html
(2.07 MB 960x1304 catbox_8mp81q.png)

(1.61 MB 960x1440 catbox_eqrkyn.png)

(1.89 MB 960x1440 catbox_c034gv.png)

(2.00 MB 960x1440 catbox_p011oc.png)

>>21273 dududu dudududu WEE
>>21090 can i ask for a fresh catbox?
(1.94 MB 960x1440 catbox_8kyzdc.png)

(1.97 MB 960x1320 catbox_gbsqyt.png)

(1.84 MB 960x1320 catbox_25f4e6.png)

(1.41 MB 3076x4000 catbox_27wygb.jpg)

as an added bonus, have this x/y chart of my 8dim lora vs the 128dim chinese one on civitai which was baked so hard you couldn't change her clothes
(728.14 KB 640x960 00228-0.png)

Stealing more artist oc's
>>21277 someone needs to clean her ears, they're overflowing with jpeg artifacts!
(71.93 KB 277x190 8353270.png)

(608.03 KB 640x800 catbox_a5bn2c.png)

(608.99 KB 640x800 catbox_wljxrv.png)

>>21278 I'm pretty sure her earrings just get recognized as text so it comes out jumbled no matter what
>>21280 thanks dude
/g/ was having a craze with this animation extension thing and I decided to try it out with my model, and oh fuck it was a shit experience https://github.com/continue-revolution/sd-webui-animatediff
The way this motion model is trained is very picky, because you can only use 75 tokens (75 per positive and negative) and it can only do portrait or square resolutions reliably. You also really don't have that much control over the animation, even if you try to boomer prompt actions. The only other catch is you need a minimum of a 3090 to play, but even with a 4090 this was pretty aids to work with.
From the little I read, the model injects itself into the unet to take effect, but whatever and however it is doing its magic ripped my model apart, because I had a lot of bad gens that did not respect my background or image composition prompting, which just points more to my model needing a lot of work, and I can't grow any more hands kek. The color issue also seems to be a combination of model compatibility and an issue with the extension and A1111, which they are working on.
Just thought I would do this to show an idea of what a ufotable-like AI animation could look like and what the future would hold for random shits like us wanting to make animation shorts in the style of our favorite studios. Also, trying to run the hires fix within the extension runs my GPU hard and at first gave me an out of memory error (shock), and the one time it worked it took WAY too long to finish, so I just got sick of trying to gacha animations while I need to work on adding and cleaning data.
(1.39 MB 1024x1536 00252-1230072765.png)

(1.50 MB 1024x1536 00000-678145001.png)

(1.64 MB 1024x1536 00254-3725078343.png)

(2.03 MB 1024x1536 00260-1240696431.png)

(1.45 MB 1024x1536 00226-3212253687.png)

(1.66 MB 1024x1536 00028-4218358561.png)

(1.73 MB 1024x1536 00409-2386351930.png)

(1.84 MB 1024x1536 00067-1191811210.png)

>>21282
>From the little I read, the model injects itself into the unet to take effect, but whatever and however it is doing its magic ripped my model apart, because I had a lot of bad gens that did not respect my background or image composition prompting, which just points more to my model needing a lot of work, and I can't grow any more hands kek. The color issue also seems to be a combination of model compatibility and an issue with the extension and A1111, which they are working on.
Isn't it inconsistent for literally everyone? I really don't feel like you should judge your model's performance based on a pretty raw extension that works like ass on A1111.
>>21285 That is a valid point. I did also consider that the motion model's training could be an issue in general, along with the way it interacts with the model, as well as it not being 100% working on A1111. I'm also aware that someone on /vtai/ was trying to finetune a motion model and I saw it get better and better, but I'm not entirely sure if that is the v2 model that was just shared. I'll try reaching out the next time he pops up.
>>21283 you're a machine, anon. nice
is there an asakuraf lora? if not, might try training a locon for it, haven't tried training one before

(680.47 KB 640x800 00059-470829706.png)

>>21289 Whoops wrong reply meant for >>21288
>>21269 >>21272 >>21271 Alright, here's my poorfag testing. Using a GTX 1650 and trying to generate a single 512x768 picture using the default DPM2 KARRAS sampler with 20 steps:
>A111 LowVRAM: 1 min 4.5 seconds
>A111 MedVRAM: CUDA_OUTOFMEMORY
>ComfyUI: 38.62 seconds
Not a very good look for A111 honestly. Seems like ComfyUI is still my best option
>>21291 Go back.
>>21292 brother I'm not shilling shit, I'm trying to optimize my poorfag gens. If there's some configuration or setting that makes A111 better then I'm willing to try it, but it seems cumfy happens to work better on lower end machines like mine.
>>21293 I don't know what to tell you. You must have done something deliberately retarded to OOM on 512x768 with medvram. I was genning 1024x1536 pics without tiled vae just fine on a fucking laptop 1060.
>>21291
>have to use a GPU that is on the list of needing lower VRAM settings
>get mad at the UI
Really nigga?
>>21291 Did you disable the image preview and such? Did you enable xformers and such? etc
>>21297 >>21291
>Not a very good look for A111 honestly. Seems like ComfyUI is still my best option
Ah, nevermind, I didn't see this. Go back.
>>21297 Yeah, I did disable image preview, though the ETA progress bar is enabled. Xformers is enabled. The A111 webui shouted at me that it couldn't load my model and to enable --no-half, so I did that too. Once again, I'm not shilling shit. I don't have any reason to be loyal to either cumfy or A111 since neither has done anything for me, and I'm pretty sure neither expects anything from me either given that I don't know either of them. I'm willing to even forward my testing results to either of them and try various extensions, settings or tweaks if it improves my gens, since I'll stick with the UI that allows my low-tier PC to gen pictures regardless of which one it is. I even downgraded my drivers just for this because I was told that SD has better performance on this version. I thought we were trying to discuss different technologies and ways to get something done, not being tribalistic over which UI you use like they're sports teams or gacha games.
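Not to fan the flames, but for the 1650 anon: --no-half puts the entire model in fp32, which roughly doubles its VRAM footprint and is probably part of why --medvram OOM'd. The usual advice for that load/NaN complaint is to try --no-half-vae first and only fall back to --no-half if it still errors. Something like this in webui-user.bat (flag names from memory, check the wiki):
set COMMANDLINE_ARGS=--medvram --xformers --no-half-vae
call webui.bat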
(3.28 KB 628x49 msedge_Vo0Rbsp603.png)

>>21299 All I hear is a poorfag with the worst possible GPU for this
>not being tribalistic over which UI you use like they're sports teams or gacha games.
Word your posts better next time. You're a cumrag nigger until then.
>>21300
>Don't insult my tribe next time or I'll get angry
I guess I'll just stick to comfy then.
>>21301
>tribe
Rich coming from someone who's shilling the UI endorsed by literal kikes. We don't need nor want your kind of parasite here, go back.
>>21289 nice! thanks anon the faces come out great but the hands and feet are a little mangled.
>Come in with pre-expectation that the other UI is better
>get mad when called out on it
I don't even understand why people bother coming to troll here
>>21302 He's also lying; 6gb and less on cumrag means forced tiled vae, which is very slow since that nigger has fp32 on by default. So either he's making up the numbers, or he spent time looking for flags on a UI without documentation and didn't spend any on the UI with documentation (so, as expected, a shill)
>>21304 The only one angry about Comfy is you. This isn't /sdg/, the evil shills aren't going to hurt you here. But they're already in your head and you lash out every time anyone reminds you of them, so they already won.
>>21305
>so either he's making up the numbers
How do you want me to prove the numbers then? Screenshot my console?
>>21306
>its only one guy
>brings up /sdg/
>doesn't know I'm a different anon from the two he's talking about
lol nice to see the only 1 guy schizo expose himself, not knowing we are /hdg/ natives.
(47.07 KB 1412x540 msedge_fDOz6A5wC7.png)

>>21304
>I don't even understand why people bother coming to troll here
4cucks is mostly bots at this point, no point in shilling to bots.
>>21305 Exactly.
>>21306 Try again schizo, that's not me.
>>21308
>4cucks is mostly bots at this point, no point in shilling to bots.
Nah, the site went to the shitter due to people like you. The site used to be bros helping each other and now it's just shitflinging, malice, tribalism and zero camaraderie. Exactly what you're doing here, which proves it's the posters and not the site.
is the comfy shilling in /hdg/ that bad for you people to be this mindbroken over it? I haven't gone there in several months
>>21309 >OY VEY GOYIM IT'S YOUR FAULT HAHAHAHA THIS FUCKING KIKE
>>21309 >buzzwords from /g/ yawn
>>21305 Here's the console. Pretty sad that this thread has devolved into idiots getting angry over nothing when the board was made to be a hangout away from all that.
>>21313 >when the board was made to have a hangout away from all that. We're not letting you in.
>>21313 This was never a hangout, this is a jail. Clearly you are a tourist
>>21289 >>21303 a lot less mangled than the last one i had. i wonder what the deal is.
>Nigga went to cry on /vt/
Unfuckingbelievable
>>21317 he did? that's hilarious, lemme check
>>21317
>Actual discussion in vtai
>"sekrit club" just has screeching over shills, jews and comfy
hmmmm it's almost like the problem is the posters here
>>21319 The first post was literally being told to fuck off, but thanks for admitting that you are the colab poorfag AND that poster on /vtai/
God dammit fizrot anon why didn't you tell us you moved retrained shit to the multi folder, I thought shit got nuked
(2.07 MB 1280x1600 00116-888498009.png)

>>21321 To be fair it's all in the changelog https://pastebin.com/0nW739JH
>>21322 oh it's you again
>>21323 >>21322 yeah okay, my bad. Didn't know there was a changelog
>>21324 Eh, it's alright, the rentry and changelog are kind of secondary to the Mega anyway. Speaking of which, I should probably look into a backup file host, though I don't see Mega going down any time soon
>>21325 Pixeldrain but it's pretty shit
>>21327 That's not a comfy thing, it's the makers of the Animatediff extension not updating the A1111 version, and I think the animation avatarfag from /g/ helped them get it working on comfy only
>>21328 Comfy wins again
>>21328 Why do the UI makers have this seething hatred for each other anyway? AI shit would be so much better if these two dudes actually worked together rather than being autistic. Then there are their communities, which also act like autistic children.
>>21329 >>21330 kys samenigger
>>21330 Wasn't there some anon here that made ComfyBox, which made ComfyUI look and work like A1111? I took a decent break from this place but it feels like a lot of the old anons left, like the fox OC guy or the Honkai anon
(1.44 MB 4000x2675 catbox_9emdy1.jpg)

(1.14 MB 4000x2675 catbox_fjh23x.jpg)

some preliminary testing (b66 really does play better with some loras than 64)
>>21333 Really wish BasedMix anon had shared what was in B66
>>21332 comfybox guy is still lurking around, but yeah things have stagnated and the novelty has worn off
>>21332 >the fox OC guy training scripts anon?
(312.80 KB 4000x640 xyz_grid-0017-2055188748.jpg)

(378.81 KB 4000x640 xyz_grid-0018-2055188748.jpg)

(361.49 KB 4000x640 xyz_grid-0021-1454692240.jpg)

(407.36 KB 4000x640 xyz_grid-0022-1454692240.jpg)

>>21333 I also did some tests and found that B66v3 is the only one I would really use, but I still think B64 is more consistent across varied loras. Here's my full ranking: Based66_v3 > Based65_Final = Based65_Proto > Based66_v2 > Based66_v1 There are also situations where B65 can be better than B66v3, but B66v3 is more consistent and less stylized.
(945.77 KB 1024x1536 00446-2645346371.png)

(1.54 MB 1024x1536 00002-2019706043.png)

(1.58 MB 1024x1536 00106-1517267037.png)

(2.14 MB 1024x1536 00094-4216768624.png)

(1.49 MB 1024x1536 00328-771975863.png)

(1.57 MB 1024x1536 00248-1484552500.png)

(1.57 MB 1024x1536 00050-3248684408.png)

(2.12 MB 1024x1536 00013-19317746.png)

>>21335 I might not give two shits about the rivalry between comfy and A1111 but I really liked that dude. He was always very helpful whenever I had a question >>21336 I think? Honkai dude was probably the basedmix anon, and apparently he was driven out by schizos. Given the above exchange with the poorfag anon getting mocked for wanting to use comfy, I can't exactly fault him for leaving if that went on for ages, I guess.
>>21340 >I might not give two shits about the rivalry between comfy and A111 but I really liked that dude i always just found it funny because voldy doesn't give a shit about the rivalry and just works on the webui
>>21340 >Given the above exchange with the poorfag anon getting mocked for wanting to use comfy, I can't exactly fault him for leaving if that went on for ages I guess. Go back.
>>21341 I think everyone's acting like children. Like, it's fucking technology, why do we have to act like this over technology? I was amazed to learn that /g/ has separate threads for IEMs and speakers, with both of them hating headphones, for example, and I simply do not understand the autism that goes into these things.
(1.00 MB 2880x2640 catbox_ry00yt.jpg)

(916.22 KB 2880x2640 catbox_atbzqy.jpg)

>>21337 So I specifically started testing it because I thought I was going insane training this locon and getting based64's grainy "painterly" effect bleeding through despite PM's style being so flat and solid. I switched over just for kicks and surprise, it instantly worked infinitely better. I'm gonna try and get the locon uploaded soon, just need to make some nice preview images. If you can't tell at first, first image is b64v3 and second is b66v3. The difference is uncanny.
(360.14 KB 4000x711 xyz_grid-0023-1454692240.jpg)

(461.76 KB 4000x711 xyz_grid-0024-1454692240.jpg)

(440.44 KB 4000x800 xyz_grid-0025-1754863528.jpg)

(521.24 KB 4000x800 xyz_grid-0026-1754863528.jpg)

>>21337 A couple more comparisons
>>21340 Anons tried to help him and then he just threw his fucking shitfit about how clearly Comfy is better, not understanding what he was talking about, then went to /vt/ and bitched about why people hate comfy. He was clearly a troll. Not our fault he got rekt by Google and didn't listen when told that sooner or later he needed to get his own GPU.
(1.86 MB 1024x1536 00017-4216768627.png)

>>21344 >>21345 i mean that makes sense considering 66's style itself is more flat and solid than 64
>>21346 >Anons tried to help him and then he just threw his fucking shitfit about how clearly Comfy is better, Point me to the shitfit. All I saw was the usual schizos sperging out over comfy because he dared to say comfy is better. Like always, a select few seem to utterly lose their shit the moment comfy gets mentioned, or specifically come here to complain about comfy getting shilled on /h/ or /g/. The drama got dragged to /sdg/ AND /vtai/ too because the poor anon dared to gen on his ancient GPU. Whatever happened to "through dick, unity"? Is it actually dead? Or has the lack of moderation made people decide to just do whatever? Some of you honestly need to take a chill pill and relax.
>>21348 I remember when that one kemono friends anon kept posting screenshots of his ancient pentium computer and I think there was another anon with an ancient pentium too and people were nice and helpful to them
>>21348 Dude, this is not a hangout spot, we aren't friends here. This is a jail; we were only supposed to be here to pass the time until our bans from pissing off a janny wore off, then go back to /h/ and continue as normal, but some wanted to keep posting their stuff here. Clearly most have left, either from the novelty wearing off or otherwise. The ones that are still here probably just want some sort of non-schizo discussion about this, but now we can't have that either. No one fucking bothered us until some fuckhead trolling with comfyshit on /h/ found our link and started flooding the comfy /sdg/ bullshit here too, so naturally we don't want that here and pushed back. It's not welcome. And when you come here with this type of talk: >Comfy just works >I think Comfy won You won't be taken seriously. If you don't like that, you have the rest of the internet to go talk your smack, if you aren't gonna post loli or share work you don't wanna share on 4cuck.
(1.72 MB 960x1440 catbox_yz0si2.png)

(1.37 MB 960x1440 catbox_c2nkp5.png)

(1.52 MB 960x1440 catbox_y4i9hx.png)

(1.41 MB 960x1440 catbox_nvwwfy.png)

https://mega.nz/folder/OoYWzR6L#psN69wnC2ljJ9OQS2FDHoQ/folder/31RGyDxL possummachine locon, I know another anon already made a LoRA but I wanted to try my hand at getting as close to his style as possible
>>21349 My 3570K is still fine, I don't need to build that new rig.
>>21338 goddamn dude you're on a roll
>>21351 you should retrain the old stuff if you have some spare time
>>21350 >Dude this is not a hang out spot, we aren't friends here. Funny, every other poster seems to have fond memories and to like each other, except you. Why don't you post your work then? What have you contributed other than chasing some anon across three boards because he said comfy was good?
>>21355 go back
>>21348 >Whatever happened to through dick unity? Is it actually dead? Or has the lack of moderation made people decide to just do whatever? Some of you honestly need to take a chill pill and relax. "NOOOOOOOOOOOOOO YOU HAVE TO LET IN THE CUMRAG USERS EVEN THOUGH THEY ONLY COME HERE TO TROLL NOOOOOOOOO MY HECKING CAMARADERIENO" No one uses cumrag here but the tourists.
>>21355 Go. Back.
>>21352 Only 10 generations behind but still marginally better than the 2600K i was using last year
https://8chan.moe/v/res/883383.html Someone should probably make a new thread now because the limit doesn't exist anymore. Unless going until post 9999 seems appealing to you.
>>21352 Dude pay someone to help you build your PC it’s not funny anymore
(1.16 MB 960x1440 catbox_0wxjsp.png)

(833.35 KB 960x1152 catbox_v6kfzk.png)

(1.54 MB 960x1248 catbox_hsecnc.png)

(1.28 MB 960x1344 catbox_3562gx.png)

>>21351 it can also kind of do his OC possum girl Macci (tagged by name but check the metadata for full prompting) >>21354 Any specific requests?
>>21360 Oh fucking hell
>>21360 >Unless going until post 9999 seems appealing to you. Yes. >>21361 It's incredibly funny. Also no, I need to pay someone to fix my bad back first.
>>21362 >Any specific requests? fishine > ransusan > spacezin :)
>>21364 Pretty sure loading 2000 posts would be a nightmare. A cyclical thread would be better at this point now that the limit doesn't exist.
>>21364 Certainly you can scam a neighbor (not black) kid by giving him 20 bucks and a chocolate bar and have him carry your shit all day until you piece that PC together
>>21365 I have retrained Fishine more than any other lora/locon. A big issue with Fishine is that she's primarily a mangaka; manga panels do not make great data, and even less so when they're heavily censored. More of her gumroad packs have been posted on ex, so I could try to rebuild the dataset with images from those, but I'm not sure how much better it can get. Ransu would actually be a great one to remake as a locon with my more recent settings; he's drawn a lot more since I made it, as well as more characters, so I'll bump him to the top of the list. Spacezin was one of my best curated datasets pre-locon phase but I can also give it a crack some time. No promises though!
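If it helps the rebuild: one cheap pre-filter for manga-heavy scrapes is culling the near-greyscale panels by saturation before tagging. A minimal sketch, assuming a flat folder of images; the folder names and thresholds are placeholders, not anything from the actual dataset:

```python
from pathlib import Path
from PIL import Image
import numpy as np

def is_mostly_greyscale(path, sat_threshold=12, frac=0.95):
    """Heuristic: treat an image as a B&W manga panel if nearly all
    pixels have near-zero saturation in HSV space."""
    img = Image.open(path).convert("RGB").resize((256, 256))
    hsv = np.asarray(img.convert("HSV"))
    saturation = hsv[..., 1]
    return (saturation < sat_threshold).mean() >= frac

dataset = Path("fishine_dataset")        # hypothetical folder name
reject = Path("fishine_dataset_mono")    # where culled panels go
reject.mkdir(exist_ok=True)

for p in list(dataset.glob("*.png")) + list(dataset.glob("*.jpg")):
    if is_mostly_greyscale(p):
        p.rename(reject / p.name)
```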
>>21366 At this point I just want to see if I can hard crash my dinosaur by loading a thread >>21367 kek
(1.91 MB 1280x1600 00009-114921189.png)

(1.46 MB 1280x1600 00011-114921189.png)

(1.63 MB 1280x1600 00013-114921189.png)

(1.34 MB 1280x1600 00015-114921189.png)

>>21362 Oh yeah forgot they have an oc too. I guess I'll have to redo mine to include her
>>21368 >A big issue with Fishine is that she's primarily a mangaka What if you were to train on the doujin covers and her doujin cover-like art instead of the manga panels? Would that limit the dataset too much?
(1.75 MB 960x1440 catbox_vvf442.png)

(1.42 MB 960x1320 catbox_tg16qj.png)

(1.91 MB 960x1440 catbox_wam5tx.png)

(1.57 MB 960x1440 catbox_vyjj72.png)

>>21371 All of them are in the dataset along with every pixiv/twitter illustration she's posted. My current set is also supplemented by a fellow anon who did a bunch of manga panel cropping. I lama-cleaner'd as much of the censorship as I could and even bought some of the Gumroad image packs to round out the dataset. If you'd like to take a look at how frankensteined the thing is: https://files.catbox.moe/v65m7f.7z >>21370 If you're the anon who made the other PM lora (and the amazing Lily Hopkins one), you're doing God's work. I hope you don't mind a little bit of friendly competition.
(972.00 KB 640x960 00055-1363381789.png)

>>21372 The more the merrier, I wasn't the first one to make a PM lora anyway.
>>20102 I am here but I wanted to take a hands-off approach since excessive moderation tends to kill a thread. Though I have to say recent developments make me have second thoughts because it's sad when cool anons get driven out due to 1-2 shitposters. But in case I happen to not be around, there's the global report feature at least in case someone posts some really bad shit that should not be here.
>>21374 i love you board owner-san-chama-kenpachi-dama please continue to take a backseat as it is the most based administrative method, shitposters will come and go during content droughts it is simply an unfortunate reality
>>21375 seconded
Every day until you like it
>>21375 >>21376 For example, the Comfy shitposting from earlier seems to have been done mostly by two anons who complain about comfy shills and the UI. There was a third one, but he limited himself to a single post. Not going to point them out since that wouldn't be cool on my part, but there's that. The rest of the anons in the thread seem pretty chill, minding their own business or just sharing their work, which is nice and is why I decided to just let things play out and hide as one more anon.
>>21378 Since the board owner showed up I'm gonna own up. I tried helping the guy who was "asking for help" by pointing at his drivers as a possible cause of his card's performance issue, but when he just started going on about how comfy was better and comfy won and this and that, I lost it. It gets real fucking annoying when someone out of the blue brings this shit up and then starts acting in what looks to be bad faith. I don't even give a shit about Comfy or the UI, I hate the people that it attracted; they just blabber on about it and turn everything into this-thing-is-better-than-that-thing. And because it's already killed a general on 4cuck, I'd rather not see it here too, so the best course is to just be aggressive about it. If that guy from earlier wasn't a shill, he needed to be better about expressing his need for help and his frustration. I can see he was clearly affected by the SD Colab ban wave that just happened and wanted to find a way to keep making gens on his insufficient card, but he should never have opened his cry for help with "guess this UI won lol". That's all I've got to say about it.
>>21379 you really shouldn't be tribalistic over UIs or allow someone to affect you that badly. this is 8ch, there's like 10~20 anons at max here; no actual shill will come here because there's no audience to capture, so all you're doing is shitting up the thread. if you post angrily or maliciously then anons will reply in the same way, and THAT is what kills a thread, because malice breeds malice and anger breeds anger, which is why 4ch threads continue getting worse and worse as the trolling and anger just escalate endlessly. remember the person on the other side of the post is a human like you and another anon like you, possibly with your same interests, so you should attempt to expect the best rather than expect the worst
>Comfy >A111 REAL MEN only need one step https://github.com/gnobitab/InstaFlow
(599.40 KB 640x800 00126-2516882584.png)

>>21351 I'm going to be picky and suggest you rename at least one of the example pics you include in the folder to have the same name as the model so it has a preview in the webui. lazylora anon does it and it's very convenient
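Seconding this. For reference, it seems to just be an image sharing the model's filename stem sitting next to the .safetensors that the extra networks tab picks up, so it's a one-liner to copy a sample over. A tiny sketch with made-up paths:

```python
import shutil
from pathlib import Path

# hypothetical paths -- adjust to wherever the locon and sample images actually live
lora = Path("models/Lora/possummachine_locon.safetensors")
sample = Path("samples/catbox_yz0si2.png")

# copy the sample next to the model as <model name>.png so the webui uses it as the preview
shutil.copy(sample, lora.with_suffix(".png"))
```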
>>21372 > I could and even bought some of the Gumroad image packs to supplement the dataset Holy shit, that's admirable. Although I feel like the dataset could perform better without somewhat poor quality images like those? I feel like they could add to occasional blurriness with the loras. I still used them really often though.
>>21380 >you really shouldn't be tribalistic over UIs or allow someone to affect you that badly I literally said I didn't care for the UI, its the people who talk about it. No one bats an eye about Invoke or Vlad or Fooocus or whatever in the once in a century time they're mentioned, the issue has always been the types that somehow for some reason end up being comfyUI users, kind of like how reddit general attracts a bunch of idealistic retards that don't know how the world works and speak in marvel references. They are annoying to deal with. The ones that stay quiet and just prompt don't bother me and I leave them be, but high profile idiots like on /g/ are insufferable as fuck, and when they crawl into /h/ they are easily noticible because they are the odd man out. And today, that guy felt like the odd man out but I gave him a chance until he decided he wanted to be a petty bitch and I am not sorry. And yes, I always ask the question, why do people bother coming here to talk about how Comfy is better than this or that? Why the mention of that at all to the 10 or so people that are here that dont have the interest in talking about it? These guys can talk about it on /sdg/ where its bought and paid for developer, an employee of StabilityAI, posts there every day. Why does it have to be here? I'm not saying to ban discussion of it, but its pretty clear that they have no good legitimate reason to talk about it here. >post angrily or maliciously [...] remember the person at the other side of the post [...] I disagree, the sheer privilege of having anonymity means we are at our most honest, whether in how we feel, or the way we act. I cant tell if the person on the other side of the post is someone who just jumped in because he was getting bored on 4cuck and just randomly spamming his shit or some guy who just wants to post his stuff and that's that. I've probably told every single regular here and maybe even you to get bent over something I didn't like that you posted or said one day but then the next thing I said you were based, probably even to the troll, because you said something agreeable. All because whatever the topic was at the time between those situations brought it about. There is clearly an unspoken desire to not want to talk about UI bullshit, and it shouldn't be surprising when the negative reaction happens. However that doesn't mean that someone bringing up the UI shit wants to start trouble, I even pointed out here >>21328 that the token thing being discussed in a screenshot, OFF SITE mind you, was an issue with the extension and its implementation on A1111 and not A1111 or ComfyUI problem, although I felt that the guy I responded to and the guy doing his comfy spam right after were the same person trying to get a reaction because it sounded like it was a bad faith attempt in continue the drama from earlier in the day. We aren't friends here, we don't know each other, but we decided this place was better than to deal with regarding our shit while waiting out our bans than put up with /h/ tranny jannies busting our balls for loli even if it wasn't. Some people are clearly more notable or not as anonymous because they are making certain LoRAs, post gens of a certain style, or are devs. Others aren't as standout but you get used to the way they talk so you kind of know whos who. We at least respect that recognition, but no way do are we really that buddy buddy or anything, I don't got you guys on a discord or text you on whatsapp or something. 
There is no need for that, there is no need for more unless something required it like a project or something. The only difference between you fucks here and the 4cuck fucks there is I am trusting you guys here to not be fucking dumbasses in the same vein like 4cuckers, particularly of the /sdg/ kind. That's it. If my expectations are broken, I just fuck off. Simple as. I will say it again, if that guy from earlier wasn't a shill, the situation could've been avoided if he wasn't being a fucking hormonal jew on his period even after getting help. Thats my last mention of this topic, don't wanna keep dragging it. I will simply ignore further posts in the future and report them. Fortunately nothing major requiring a global report has happened.
>>21385 bruh
I think we should start fresh with a new thread
>>21385 I read all this
>>21385 bro it doesn't take a wall of text. if some dude feels like comfy is better for their workflow then let them use it, since everyone has diff gens and workflows. that's it. if the end result is good then that's what matters
whats good. hows the loli drama been in the last few months since i was gone? also any new dumps/resources for loli diffusion stuff that i should bookmark?
>>21390 I didn't even know a loli difussion model existed https://huggingface.co/Undertrainingspy0014/RandomStuff/blob/main/loli_A.safetensors I've been using this model since some random JP tuned it for loli stuff and I like how it looks more than AOM.
>>21390 things have pretty much stagnated, animations are the only new thing that is popping and that's in the last two days or so. https://github.com/continue-revolution/sd-webui-animatediff
>>21372 >>21365 I gave a fishine lora a shot. Cleaned up the dataset using just the color images: removed watermarks and text, grouped duplicates, and de-jpeg'd. Here's the lora and dataset https://files.catbox.moe/jx3t1j.7z
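Nice. For the duplicate grouping part, a perceptual-hash pass catches most re-uploads and slightly recompressed copies before you eyeball the rest. A minimal sketch using the imagehash package; the folder name and the distance cutoff of 4 are placeholders to tune:

```python
from pathlib import Path
from PIL import Image
import imagehash  # pip install ImageHash

folder = Path("fishine_color")  # hypothetical dataset folder
seen = []                       # list of (phash, path) already kept

for p in sorted(folder.iterdir()):
    if p.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
        continue
    h = imagehash.phash(Image.open(p))
    # Hamming distance between hashes; small distance = near-identical image
    dupe_of = next((kept for known, kept in seen if h - known <= 4), None)
    if dupe_of is not None:
        print(f"{p.name} looks like a duplicate of {dupe_of.name}")
    else:
        seen.append((h, p))
```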
(2.79 MB 1280x1600 00146-2093665317.png)

(2.22 MB 1280x1600 00144-2093665317.png)

>>21343 Consoomerism does that. It's easy to pick a tribe after you've sunk your money into it.
>>21139 After about a dozen failed bakes I realized why all my experiments fucking sucked in comparison to this: dorontabi is an absolute hack. Most of his works are half-assed at best. My loras didn't pick up a consistent style because there isn't any. Gonna give it another shot with a smaller curated dataset.
>>21396 who says you have to have a consistent style to be an artist lol
>>21374 what has the post/image limit been raised to?
(1.40 MB 960x1320 catbox_c69eco.png)

(1.32 MB 960x1320 catbox_tezg6l.png)

(1.55 MB 960x1440 catbox_jnxvtd.png)

(1.08 MB 960x1320 catbox_ek7ch9.png)

>>21394 A valiant effort, though by omitting the manga panels, and by extension the majority of the NSFW works, you miss out on some of the artist's most defining traits like the dappled areolae. I will probably still sift through this for the cleaned images and replace them in my personal stash for future attempts.
Is there any redrop lora laying around? https://gelbooru.com/index.php?page=post&s=list&tags=redrop I feel like I've seen one somewhere but now I can't find it
(1.41 MB 1024x1536 00011-2001285843.png)

(1.55 MB 1024x1536 00020-3441970620.png)

(1.61 MB 1024x1536 00126-3952657675.png)

(2.00 MB 1024x1536 00033-2271499254.png)

https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg added hiroe-rei (black lagoon manga artist)
(1.32 MB 1024x1536 00267-3148154924.png)

(1.39 MB 1024x1536 00140-1621241889.png)

(1.95 MB 1024x1536 00011-1593934549.png)

(1.83 MB 1024x1536 00933-323453802.png)

>>21316 maybe it's the artist's style on the hands/feet which doesn't translate well to the anime models >>21374 thank you for your service anon
(2.72 MB 1280x1920 00004-1123505631.png)

(2.88 MB 1280x1920 00016-1916991051.png)

(2.72 MB 1280x1920 00013-1916991048.png)

(2.81 MB 1280x1920 00011-1916991046.png)

demons!
fuck I love vpred. I'm still surprised how little relatively large changes affect the rest of the image
>>21406 what vpred model are you playing around with?
>>21407 furry models, since there hasn't been much effort on the anime or realistic side. fluffyrock vpred is good but a bit bare, so I mostly use easyfluff with the quality tag lora
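For anyone who hasn't looked under the hood: the difference with vpred is the training target — the model predicts a "velocity" mix of noise and image instead of the noise itself, which is also why (as far as I know) these checkpoints want the v-prediction yaml they ship with instead of the default eps config, or they sample garbage. Rough sketch of the standard formulation (Salimans & Ho), nothing fluffyrock-specific:

```python
import torch

def v_target(x0: torch.Tensor, eps: torch.Tensor, alpha_bar_t: torch.Tensor) -> torch.Tensor:
    """v-prediction target: v = sqrt(a_bar_t) * eps - sqrt(1 - a_bar_t) * x0.
    (eps-prediction models just regress on eps itself.)"""
    return alpha_bar_t.sqrt() * eps - (1.0 - alpha_bar_t).sqrt() * x0

def x0_from_v(x_t: torch.Tensor, v: torch.Tensor, alpha_bar_t: torch.Tensor) -> torch.Tensor:
    """Recover the clean-image estimate from a v prediction:
    x0 = sqrt(a_bar_t) * x_t - sqrt(1 - a_bar_t) * v."""
    return alpha_bar_t.sqrt() * x_t - (1.0 - alpha_bar_t).sqrt() * v
```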
Funny how the board suddenly shifted from being anti-cumrag and anti-cumrag-shills to just "lol who cares tribe this tribe that". Fuck you and your subversive tactics.
>>21410 Literally no better than cumrag himself.
>>21410 There is still no legitimate reason to bring up cumrag on this board
>>21412 You know, aside from the shills who come here once or twice a week.
(56.61 KB 642x580 jiiiii.jpg)

>>21415 (also >>21412 is a different anon)
>>21416 if it is then im pretty sure they can see that anyway
So it was just a samefag all along minus?
>>21418 it's two anons according to an earlier post here >>21378
(2.98 MB 1600x1280 00177-4190789785.png)

kek he's shitposting on /h/ too lol
>>21410 You really should reduce your /pol/ consumption before you get permanent brain damage.
>>21422 >he thinks I browse the glownigger honeypot that is /pol/ Think again faggot.
>>21418 >>21419 I've never shilled cumrag, I just gave the "well clearly cumrag won as demonstrated by my gtx 1650" shill some shit before he started his >muh tribes samefagging.
>>21424 you misread, the post was talking about "two anons complaining about shills and shitposting about comfy" not "two shills"
>>21425 >the people who complain about the shills are just as guilty as the shills Eat shit.
Told ya shoebill is a fucking nutjob.
>>21426 when there are no shills and the anti-shills shit up the board about shilling anyway, yes.
Does this board support post IDs? That would at least cut down on potential samefagging and let others know who's new to the thread. Venting your grievances about a situation shouldn't be policed or met with being called a troll just because the poster isn't a new ID. If 3 of the 4 posts called out here >>21414 were a samefag instigator, that's one thing and they should be ignored, but if they weren't then they shouldn't be dismissed. I for one don't like Comfy (the dev) and would rather he not be mentioned. If you use ComfyUI, just keep it to yourself and don't signal it, because otherwise it just looks like you want to start problems. It's that simple.
>>21429 Oh yeah, of course, *I'm* the instigator now. Yep, it was part of my plan all along, how'd you guess? >>21427 Like I've said in >>20958 >my anger issues have anger issues. Can't you read?
>>21429 >post IDs no thanks
I'm going to make a new thread in a bit since we no longer have a post limit and people are being schizo. Thinking we should keep it to around 1200 posts per thread in the future just for readability. Will include the important info in the OP, including the 8chan catbox script.
>>21430 if your anger issues have anger issues then i'm pretty sure that makes you the instigator, because you're making your anger issues into everyone's issue rather than your own problem. Literally just keep it to yourself and don't sperg out about it. Nobody likes Comfy (the dev) but if the software is good then use it???
>>21433 And I'm pretty sure this makes you a little subversive cocksucker. Literally NO ONE talks about cumrag unless some shill comes here to shill outright or """subtly""" shill by pretending to ask for help and saying shit like "well clearly cumfy won because my bottom of the barrel 16 series on cumrag, voldy needs to optimize his shit for low-end hardware"
>>21434 works OOB on cumrag*
>>21434 you're insane
>>21436 You're a faggot. Next.
>>21438 >>21438 >>21438 new thread baked, suggesting we keep it around 1200 posts as before
>>21430 you don't have anger issues, you have skin color issues. you behave like a nigger because you are a nigger and you nigger up any decent place you're mistakenly allowed into.
>>21440 I'm White, try again.
>>21441 god likes to play pranks sometimes stop being a nigger.
>>21442 >god likes to play pranks sometimes I can see that. The subversiveness of a jew but the street-shitting "capability" of a pajeet.
>>21443 everyone that spends time around you thinks you act like a nigger. this is, of course, someone else's fault. you a good boy who din do nuffin wrong.
>>21444 >everyone that spends time around you thinks you act like a nigger >y-you're nigger, all my alts agree with me! Put some more effort into this, it's not entertaining.
this anon needs to go back to varishangout where he belongs
>>21448 they're gonna love the shoutout
>>21447 >go back to where you came from Sorry, can't go back to /h/, I don't have a time machine that can take me back to february/march.

