/hdg/ - Stable Diffusion

Anime girls generated with AI


>>21438 If everyone is fine with it then we could try a cyclic thread instead now that the bump limit doesn't exist
(1.40 MB 960x1320 catbox_c69eco.png)

(1.73 MB 960x1440 catbox_btxibw.png)

>>21445 I think it's a good decision since threads start to become monsters to navigate once you hit 1000+ posts. I'd say anywhere between 1000-1500 would be a good "soft limit" before someone makes a new thread, but 1200 is probably ideal since that's the old limit.
Uoooooooooh weekend soon, maybe I'll actually spend some time genning.
so close... yet so far...
>>21445 Yea, a never ending thread is a nightmare
>>21454 mother fucker it kept my name from when I posted this lol
(2.14 MB 960x1440 catbox_2vzyiq.png)

kinda want to make a multi-concept abby... but i already have such a long to-do list...
(3.08 MB 1280x1920 00012-48338449.png)

(3.71 MB 1280x1920 00002-4284310275.png)

(5.16 MB 1280x1920 00021-2750196038.png)

(3.76 MB 1280x1920 00004-288075961.png)

hatching
been trying to use the vpred merging method and trying to copy the based64 recipe or something close to it. All I got was fried shit when trying to use fluffyrock... surprisingly futanarifactor didn't look bad
>>21458 You used the SuperMerger trainDifference for making the merge, right? And do you also have the CFG rescale?
>>21459 Didn't try the CFG rescale test yet, I'll just get the basic recipe for B64 done and then try to use traindifference for the final product
>>21460 Yea you need CFG rescale for vpred models to work as intended. https://github.com/Seshelle/CFG_Rescale_webui
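For anyone wondering what the rescale actually does: this is a rough sketch of the idea from the zero terminal SNR paper that the extension is based on, not the extension's literal code. The guided prediction gets rescaled so its standard deviation matches the conditional prediction, then blended back in by a factor phi.

import torch

def rescaled_cfg(cond, uncond, scale=7.0, phi=0.7):
    # plain classifier-free guidance
    cfg = uncond + scale * (cond - uncond)
    # rescale so the guided prediction keeps the std of the conditional one,
    # which is what stops vpred/zSNR models from frying or washing out
    rescaled = cfg * (cond.std() / cfg.std())
    # phi blends the rescaled result with the raw CFG output
    return phi * rescaled + (1 - phi) * cfg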
>>21461 WHY THE FUCK IS IT SAVING THAT NAME AHHHHHHHHHHHHHHHHHHH
(1.88 MB 1152x1664 00205-220199213.png)

(1.49 MB 1024x1536 00014-2747990408.png)

(1.74 MB 1024x1536 00019-1016466446.png)

(1.75 MB 1024x1536 00067-707174279.png)

(1.75 MB 1024x1536 00130-1252688592.png)

(1.60 MB 1024x1536 00021-2822841430.png)

(1.67 MB 1024x1536 00022-1900987724.png)

(1.70 MB 1024x1536 00201-2919334864.png)

(1.64 MB 1024x1536 00336-247035734.png)

(2.48 MB 1280x1920 00020-361369045.png)

(2.53 MB 1280x1920 00022-2958658310.png)

(2.46 MB 1280x1920 00024-2958658312.png)

(2.43 MB 1280x1920 00025-2958658313.png)

[serafuku:leotard:0.3] is an amazing outfit
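(in case anyone hasn't played with prompt editing: assuming the usual webui behaviour, [serafuku:leotard:0.3] renders "serafuku" for roughly the first 30% of the steps and then swaps the token for "leotard", so the composition gets blocked in as one outfit and the details get finished as the other)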
>>21406 feels weird being able to gen at 1088x1088 (or even 1536x1536) and have gens come out clean without the need for hires fix
furries will probably have trained a new general purpose base model, hacked up that one new architecture to make it even better and train it on furry cocks, before we get a crumb of new models
the future is furry
>Based66-v3-pruned
where was this posted again?
>>21464 This is very nice and it seems to love traditional media, well most styles I like tend to work well with it. I'll post something later unless I'm stuck in MMO hell all evening.
>>21468 it is on civitai and huggingface
(1.52 MB 960x1440 catbox_y4i9hx.png)

(1.40 MB 960x1440 catbox_nowtc6.png)

>>21470 yeah I can't seem to reproduce the image from the Possummachine preview last thread
(399.08 KB 1416x1142 catbox_yzmcqw.png)

>>21471 Strange, I made sure I uploaded the correct iteration of the locon and it is indeed identical to the one I'm using. This is the specific version of b66 I am using: https://huggingface.co/AnonymousM/Based-mixes/blob/main/Based64mix-V3-Pruned.safetensors
>>21472
>640x960
how often does this res release mustard gas for you?
(1.48 MB 768x1440 catbox_v6ib9t.png)

>>21473 Never, that's the resolution I typically gen in (and then upscale 1.5 in hires) since it's a 2:3 ratio same as 512x768. Typically you're fine as long as your base gen resolution is under 1024x in either axis (ie, here's a 512x960 gen).
>>21474 I see, it often produced double heads and other anatomy issues back when I tried similar resolutions some time ago, but maybe it was due to shit loras I used.
(1.69 MB 960x1440 catbox_t3ym6l.png)

>>21471 I should add that I've tried it both loaded through LyCORIS and as a regular lora but neither manages to reproduce the image
>>21472
>Based64mix-v3-pruned
we're straying further
(1.57 MB 960x1440 catbox_790elz.png)

(1.67 MB 960x1440 catbox_fglp36.png)

>>21476 There's two other things I can think of that might be causing the incompatibility since the model and locon hashes are the same. One is --xformers command line argument, which I have enabled. Second is that I'm using the native locon support in webui 1.6 while you're using the extension, though theoretically those should be the same? As an aside, I switched from b64-v3 to b66-v3 in the process of making this locon because it was frustrating me just how much chromatic noise b64 adds to generations. B66 so far plays very well with all my style loras but especially ones with flat color styles. There's a pretty good chance some of my older style loras were overbaked to compensate...
>>21477
>One is --xformers command line argument
xformers can't change that much, it's most likely "hardware differences". we've seen this happen before on gpus even within the same gen, wouldn't surprise me if the noise that's being (re)generated just doesn't match or something like that
>>21477
>though theoretically those should be the same?
It might not be, also some samplers look different in webui 1.6.0 so it might be worth considering too
>>21477 put dark-skinned male in the negs
>>21478 doesn't auto1111 have a setting for where the seed is generated? gpu, cpu, something else.
>>21481 yes but everyone is using gpu
(1.29 MB 960x1440 catbox_kioo2k.png)

>>21480 doesn't seem to change much, prob because almost all of the m/f illustrations in pm's portfolio are darker skinned male
>>21483 i bet the (((yakumos))) are gapping them in even gensokyo isn't safe
>>21481 >>21482 Everyone is using GPU but again, same prompt on different hardware with identical software gives out different results. This would be so much easier to illustrate if I could find an old XYZ plot posted in /h/ like six or eight months ago.
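To put the seed part in concrete terms, a tiny torch sketch (just an illustration, not anything pulled from the webui): the same seed produces different starting noise depending on whether the generator lives on the CPU or the GPU, which is why the RNG source setting matters before you even get to kernel-level differences between cards.

import torch

seed, shape = 1234, (1, 4, 64, 64)  # latent-sized noise for a 512x512 gen

cpu_noise = torch.randn(shape, generator=torch.Generator("cpu").manual_seed(seed))
gpu_noise = torch.randn(shape, generator=torch.Generator("cuda").manual_seed(seed), device="cuda")

# prints False: CPU and CUDA use different RNG streams, so "same seed" only
# reproduces when the noise source (and then the actual kernels) match
print(torch.allclose(cpu_noise, gpu_noise.cpu()))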
>>21467 I think the funny thing is that most of these guys are just using 16GB TPUs or just 3090s to train these models and just waiting it out. They also have a lot of centralized information so it's easy for them to bounce this info off each other and test a lot more technical and specific pieces of the training process that there's no interest in learning when just making LoRAs or simpler models. I'm doing what I can on my own, but having a base of documented info and resources to get shit done makes all the difference in wanting to progress this space.
What's the minimal dataset needed for a character Lora? I'm thinking of creating an OC and will be drawing the front/back/profile face close up views of it, not sure if that would be enough
>>21487 https://rentry.org/bp87n Replace 3D game source and use your drawings instead. As long as you have plenty of reference shots for the various composition shots you should be fine.
https://colab.research.google.com/github/R3gm/SD_diffusers_interactive/blob/main/Stable_diffusion_interactive_notebook.ipynb Is it possible to add a VAE loader to this? It seems to work pretty well as a colab generator thing but it can't load a VAE, which is cripplingly bad given how awful pics look without one
>>21489 I guess you could bake-in vae?
>>21489 also, does it upscale with denoise or not? if it's just esrgan upscaling it's pretty useless, no?
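Not sure how that notebook is wired internally, but since it's diffusers-based, loading an external VAE is usually only a couple of lines; something along these lines (the checkpoint id is a placeholder, the VAE repo is the usual ft-mse one):

import torch
from diffusers import StableDiffusionPipeline, AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "your/checkpoint-here",   # placeholder: whatever model the notebook already loads
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
# or, if the pipeline object already exists: pipe.vae = vae.to("cuda")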
(1.24 MB 1024x1536 00066-2052050549.png)

(1.30 MB 1024x1536 00029-2335195777.png)

(1.56 MB 1024x1536 01021-3659056030.png)

(1.94 MB 1024x1536 00255-4168590473.png)

(1.11 MB 1024x1536 00020-2385986088.png)

(1.22 MB 1024x1536 00006-1615920692.png)

(1.32 MB 1024x1536 00061-2052050549.png)

(1.62 MB 1024x1536 00074-1945308207.png)

>>21447 catbox please?
>>21492 Damn man, another great one. How do you feel about this style? https://gelbooru.com/index.php?page=post&s=list&tags=e.o.&pid=42
>>21492 Any chance you can do a csrb lora? I was planning to do it since the only one is pretty old, but I never got around to it.
>>21449 been saving a lot of your gens, very nice
>>21495 yeah i'll do that one
>>21496 got a link to their art? couldn't find an artist with that name
>>21498 Oh, I mean cutesexyrobutts.
Do we have a Pochi lora?
>>21496 Pretty sure one exists already unless it's in need of an update.
>>21499 yeah i have a dataset for him. just been putting it off since i don't want to go through the shitload of images he has lol
>>21500 There is a csrb lora already but it was made pretty early on. It's not bad, but I think it can be better with how much Loras have progressed
Some anon managed to brute force their way into some site named TooningMagic's devbox and got access to their huggingface. https://tooning.io/magic
>Anyway, here's the base models with their loras, you can download their main 4 "styles" here (the base models are some civitai models merged together with custom lora/loras):
https://magic-excel.s3.amazonaws.com/wow/toonMooMoo.zip
https://magic-excel.s3.amazonaws.com/wow/toonRiley.zip
https://magic-excel.s3.amazonaws.com/wow/toonSera.zip
https://magic-excel.s3.amazonaws.com/wow/toonYuna.zip
https://magic-excel.s3.amazonaws.com/wow/tooning_src.7z
>Yes, I just unprivated those repos because I have access, I doubt they'll stay up for very long now.
>picrel is the strengths for those loras, I'll also unprivate other ones shortly
>I might publish a bigger source code leak later. [Already added above]
>The most interested ones in this will probably be Koreans, so tell them if you know any.
If the links stop working I got the stuff downloaded just in case. Not sure if the shit is any good. Guy says the models look like merges with some private shit but that the LoRAs are custom.
>>21503 did the anon post a torrent?
>>21504 anon... I would've posted one if he did. He basically used his unauthorized access to make internal download links within their account after they shut down their huggingface repo. I'd just VPN and download them directly.
>>21503 I've seen that post, aren't those just some meme shitmixes?
>>21506 the models are probably shit, the LoRAs seem to be the only unique stuff. I haven't tried them yet however.
(2.06 MB 4000x3938 catbox_tt71hw.jpg)

(1.97 MB 4000x3872 catbox_ecomfq.jpg)

(1.63 MB 4000x3938 catbox_ughcii.jpg)

(2.33 MB 4000x3938 catbox_34ba88.jpg)

I'm working on re-making my ransusan LoRA as a LoCON based on a request from the last thread, figured I would share some progress and ask for input.
artist-ransusan is the old 128dim LoRA, smallest dataset with medium curation
ransusan-000024 is a 16dim LoCON, dataset is larger but more heavily curated
ransusan-000032 is a 16dim LoCON, even heavier curation, dataset has speech bubble, japanese text, and Friend names that aren't auto-tagged added to each caption file
Right now I'm leaning towards ransusan-000024 as my release candidate though I may do a version with only Kaban and Serval tagged for characters. Kaban consistently comes out well, as expected, but Serval is a mixed bag since there are few images of her in her full outfit.
(1.99 MB 960x1440 catbox_3g7obn.png)

(1.83 MB 960x1440 catbox_e2y6ln.png)

(1.34 MB 960x1440 catbox_kllyy6.png)

(2.20 MB 960x1440 catbox_ja09o6.png)

>>21508 These are some of my top picks from ransusan-000024. I should add that since this is a style LoCON the style is the focus and being able to prompt Kaban/Serval is really just a side effect of them being so prominent in the dataset and ideally you should be using character LoRAs if you want to prompt someone in particular.
>>21503
>If the links stop working I got the stuff downloaded just in case.
Can confirm it just happened. I've got "access denied" on every link.
>>21503 from the previews on their website their models and loras don't seem to stand out any more than the average civitai slop, though I suppose that makes the fact that they're paywalled and got leaked even funnier
(2.81 MB 960x1440 catbox_dzoz0y.png)

(1.15 MB 960x1320 catbox_33v542.png)

(1.67 MB 960x1440 catbox_k3iy98.png)

(2.29 MB 960x1440 catbox_9jjl8u.png)

Alright, after several more trials I decided to scrap tagging less frequent characters + speech bubbles and settle on my original 24 epoch LoCon. It's REALLY good at Kaban and good at Serval's hair/ears/tail but not her actual outfit. Attempts to make Serval more pronounced overfit the LoRA on character details so I had to make concessions.
https://mega.nz/folder/OoYWzR6L#psN69wnC2ljJ9OQS2FDHoQ/folder/KxwmEZgQ
I'm still open to suggestions/requests on improving my other old character/style LoRAs if anyone is interested. Only exception is Fishine, unless you want to buy more of the Gumroad image packs and upload them on ex so I can supplement the dataset.
>>21511 Yea I can understand paywalling a finetune, like if the furries locked down their models, but a bunch of civitai merge shit? Hilariously pathetic. I wonder how many idiots actually paid for this.
>>21508 >>21509 >>21512 Thanks for doing this again (again), I'll grab a bite to eat and look for a way to properly wake myself up before I go insane from how little I was able to sleep today
Is anyone willing to make a new Henreader lora? the last one we got was in January.
(1.19 MB 1024x1536 00296-2854278531.png)

(1.86 MB 1024x1536 00033-3746375817.png)

(1.88 MB 1024x1536 00319-1045218923.png)

(1.86 MB 1024x1536 00002-1102597370.png)

(1.03 MB 1024x1536 00117-2738356785.png)

(1.33 MB 1024x1536 00014-1197138623.png)

(1.41 MB 1024x1536 00069-3698989288.png)

(1.70 MB 1024x1536 00262-723553591.png)

So I'm working on a character Lora and have some questions: what if my character has 2 different hairstyles, should I make a different Lora for each hairstyle or just tag each one with a specific word like "HairstyleA", "HairstyleB"?
What about if my char wears a hoodie most of the time and I only have like 10 images where the hood is down?
>>21520 Depends how complex and how flexible you want your lora to be. If you want it to be really good at one character in one outfit then prune anything that isn't exactly that. In general I autotag and then manually edit tags for consistency and add tags for important details that autotag might have missed. Depending on your dataset you may need more repeats for alternate outfits/hairstyles. I would suggest to avoid custom tags, just prompt them normally but that's my preference and I think it allows for more flexibility. As for the hood down, if the "hood down" doesn't work (assuming dataset is tagged properly) then increase the repeats on those images.
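On the repeats point, this is the usual kohya-style layout I'd assume here (folder and tag names are made up): the number prefix is how many times each image in that folder gets repeated per epoch, so a small "hood down" subset can be weighted up without touching the captions.

dataset/
├── 4_mychar/               # ~100 hood-up images, 4 repeats each
│   ├── 001.png
│   └── 001.txt             # "1girl, hood up, blue hoodie, short hair, ..."
└── 12_mychar_hood_down/    # only ~10 hood-down images, so more repeats to balance
    ├── 101.png
    └── 101.txt             # "1girl, hood down, blue hoodie, short hair, ..."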
Wouldn't mind some testers YUhSMGNITTZMeTl3YVhobGJHUnlZV2x1TG1OdmJTOTFMMmxpVGtSMVYybHU=
>>21522 What is that?
>>21523 super mario 64 code, if you know or hung out on /t/ long enough you should know what to do with it.
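(for the impatient: it's just base64, occasionally wrapped more than once — a throwaway Python sketch, assuming the string really is an encoded link)

import base64

def peel(s):
    # some posters nest the encoding, so keep decoding until a URL falls out
    while not s.startswith("http"):
        s = base64.b64decode(s + "=" * (-len(s) % 4)).decode()
    return s

print(peel("YUhSMGNITTZMeTl3YVhobGJHUnlZV2x1TG1OdmJTOTFMMmxpVGtSMVYybHU="))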
>>21522 you clever fuck, downloading
>>21524 why encode it though? it's not like there's any word filters or anything
>>21526 not him but it keeps brainlets away
>>21527 idk it seems pretty stupid to me
>You need to have browsed or lurked in this random 4cucks board to test my stuff or else you're a brainlet
>>21526 Most of my links are encoded, and I'll be reworking the model depending on the results so keeping it mario64'd will make sure that it has an easier time of being left behind.
>>21529 So I take it you are back from your slumber?
>>21530 Got a message on huggingface asking me to do the previous model without HLL4. I don't have the specific recipe anymore so I just decided to do the next version. I was thinking of testing out vpred and zSNR but I'll be waiting on that until HLL anon manages to do that with their next finetune.
>>21531 Yea that was one of us. We weren't sure if you were ever coming back and wanted to test vpred and zSNR with a model we already knew worked best.
Downloaded, let me start screwing around with some saved images
>>21526 >>21528 anon i never go on /t/ for torrents and i figured it out in like 5 minutes of googling
Out of curiosity has anyone tried out Anyv5 and its variants? Was curious about comparing v3 and v5 for my own thing independent of what was just linked.
>>21522
>YUhSMGNITTZMeTl3YVhobGJHUnlZV2x1TG1OdmJTOTFMMmxpVGtSMVYybHU=
am i missing something or isn't this model exactly the same as 64?
>>21536 Not him but I just did a test and the culprit is defmix red. used one of my models mixed with defmix red and I got a very very similar gen
>>21536 >>21537 yeah there's weird shit going on with the pruned version, not sure why but the pruned version copies your previous model while the unpruned version has the "actual" model, I'm uploading it to gofile right now so just give it a bit of time. I'm going to have to look at what's the culprit of that and why I can't use the clip fix tool
>>21538
>yeah there's weird shit going on with the pruned version, not sure why but the pruned version copies your previous model while the unpruned version has the "actual" model
Oh shit you are right. I reloaded my UI and then it carried over that model I had last loaded
>>21538 >>21539 Also unpruned models allegedly do better with the animation extension and motion model people are playing with right now, so I'd appreciate it if you keep including an unpruned version even once you fix the pruned problem.
>>21539 Yeah, I was using supermerger this time around and the later merges in the recipe couldn't have the clip ID tensors checked out or fixed (some weird shit about "this model is an SD2.X/XL model"). No problem though, the unpruned version is pretty much what I'm going for appearance wise so the "fixed" version isn't going to be that different.
>>21541 Ok, I'll just wait for the download link, start it, then go to sleep, I should've been in bed 3 hours ago kek
>>21542 yeah here it is
aHR0cHM6Ly9nb2ZpbGUuaW8vZC9KelJkZGw=
tried again with supermerger to do the first add difference merge and it just gives me the dumb error with the clip tensors checker extension. Not sure what the heck happened with supermerger recently because all the previous models I made with it didn't have that error.
(2.31 MB 1024x1536 catbox_97hhku.png)

(2.24 MB 1024x1536 catbox_e8pabi.png)

I'll do some more testing later, time for more vidya.
(6.88 MB 2048x3072 grid-0016.png)

(6.95 MB 2048x3072 grid-0017.png)

(6.77 MB 2048x3072 grid-0019.png)

>>21543 i like this way more than 65 or 66. it's like 64 but with less aom-2.5d style by default
>>21545 good to hear, more tests are welcome, especially with multiple LORAs, just in case I'll have to lower the weight sum value of the last part of the recipe to fix the issue like last time.
>>21522 Your shitmix VAE just NaNs instantly.
(4.92 MB 2048x3072 grid-0022.png)

(4.63 MB 2048x3072 grid-0024.png)

(5.91 MB 2048x3072 grid-0021.png)

(5.15 MB 2048x3072 grid-0023.png)

>>21546 left to right: based64 no lora, based64 with lora, based67 no lora, based67 with lora
seems to work well. 4 loras all at 0.8. i trained the loras while testing grids on 64 so obviously it'll be slightly more accurate on that model
>>21548 kek of course it fucked the order. well, replace left-right with 21-24
>>21547 see >>21543
>>21548 Damn this looks promising. Was getting tired of the aom 2.5d and blur that based64 has out of the box.
>>21527
>wasn't getting anything useful from decoding
>googling for solutions, started feeling like a complete brainlet
>there was a symbol missing from a copied string
god damn it
>>21543 Looks good from the previews others posted, giving it a download to try.
>>21512 >>21514 gomenasorry the sleep deprivation got to me and I ended up drinking until I passed out lol
do the IN layers get used during generation or only during img2img? I thought the denoise generation procedure happened in latent space, so only after all the IN layers had already run, then the latent representation goes through the OUT layers to become a pixel image again.
Comparing this based67 model to based66. I gotta say, I'm not impressed, as one of the apparently few that liked 66.
maybe see if the noise offset can get mixed into the next basedmix https://civitai.com/models/10391/noise-offset-for-true-darkness-in-sd
>>21556 stop using noise offset, it's terrible
>>21556 just use a lora for it man
>>21557 >>21558 ok ok! I'll never mention it again. just wanted to gen night scenes without fighting against the model wanting the image to average to gray.
>>21559 Just use LowRA
>brat realizes correction is imminent
(1.66 MB 1024x1536 catbox_0uf5na.png)

(1.82 MB 1024x1536 catbox_984jfo.png)

(1.41 MB 3208x4000 catbox_uyfstp.jpg)

(1.28 MB 3095x4000 catbox_83xpvi.jpg)

(1.34 MB 3095x4000 catbox_9tmy3h.jpg)

(1.56 MB 3496x4000 catbox_t0kz3t.jpg)

>>21522 I made four test grids: style LoRA with base model character, style LoRA with a non-character/"OC", heavy style LoRA with a character LoRA, and two style LoRAs with a character LoRA.
First impressions: it's closer to b66 than b64 and retains the high compatibility with flatter color styles. I think the lack of HLL might be visible in the cowgirl grid, gamers probably make up a large amount of their dataset and so the kemomimi tends away from the fox ears to equalize with the cow-related tokens. The Suika grid indicates that the chromatic noise is somewhere between b64 and b66 since there's still quite a bit of it going on. Interesting things happening with the lighting in the b67 Junko grid, though it's hard to tell exactly what is changing it there.
I'm not really sure how to provide direct feedback since it's hard to directly control how certain aspects of a block merge behave. I like the lighting and the noise being a middle ground, it feels like a good start for "basedmix without the HLL" but something is missing and I can't place it. I do want to mention I've switched to using b66 as my daily driver model over b64 because of it being less noisy, it just took a while for me to see the advantages of it playing nice with a wider variety of style LoRAs.
(1.69 MB 1024x1536 00046-3648113525.png)

(1.77 MB 1024x1536 00014-3293323123.png)

(1.83 MB 1024x1536 00096-2444636141.png)

(1.91 MB 1024x1536 00076-3751918655.png)

https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg added hiramoto akira (prison school artist)
(1.43 MB 1024x1536 00124-413444902.png)

(1.87 MB 1024x1536 00591-2803126531.png)

(1.78 MB 1024x1536 00298-968407095.png)

(1.87 MB 1024x1536 00412-3683539931.png)

>>21548 >>21555 >>21563 Thanks for the feedback, the thing that might be missing is defmix red since I decided to take it out, HLL is still being used but it was a mix between the latest version 5c and 3-final pruned. This version I made was done with 3-final pruned but if it has issues with the cowkini fox girl I'll add defmix red in again
aHR0cHM6Ly9nb2ZpbGUuaW8vZC9jajk2SHg=
>>21566 Is there any chance you can just remake based66 except without hll?
>>21567 I don't have the specific presets I used for b66 so it would be vastly different so I just decided to do 67.
>>21566
>mix between the latest version 5c and 3-final pruned
isn't 3.1 the best one for mixing though? the later ones feel more overfit on vtubers
>>21569 yeah but it seems like 5c didn't have as many issues as hll4 so I wanted to test it out
>>21566 One day I'll learn what this code means.
>>21568 If you merged using SuperMerger it saves a log for each merge you do in its extensions folder as an MS excel sheet, might help to revise and refine mixes.
>>21571 my man there is a single scheme that looks like that
>>21572 yeah but there was just the weird issue of supermerger saving models as a 2.x/SDXL model according to the webui check tensors extension, which then leads to the model toolkit showing there's a ton of broken clips in the model and you won't be able to prune it properly.
>>21574 Another case of XL ruining everything lol
>>21575 yeah supermerger didn't have those issues in the past, I can try using a previous version of the extension and seeing if it fixes everything because the metadata saving option is useful for the individual parts of mixes
>>21561 >>21562 ekune's imminent penetration lora? I tried his other loras and they were cursed
>>21571 you're not missing out on anything for now.
>>21578 lel just means that its working as intended
>>21579 I mean, it's kinda pointless to be secretive on the already secret board. but I'm glad you're back so I won't spoil your fun.
>>21580 I >>21579 (me) am not him.
>>21573 thought it was a torrent magnet but it didn't work or I put it in wrong.
>>21579
>secret gate keeping on the secret board
Surprised there isn't a secret sub-thread with a special handshake to get in.
Nice to see the gayshit entrants/tourists getting filtered
>>21580 ...it's not exactly a secret when the instructions to find the model are in its name and everyone is referring to it by name
>>21577 Correct, well this one is alright and you can combine it with his pov doggystyle LoRA.
>>21583 Who are you filtering? Like seriously? The first few threads were everyone agreeing to spread the word of the board and GROW the community so this place doesn't stagnate and die and now it's all about "filtering migrants"
>>21584 The first code didn't work. Like, if you run >>21522 through a decoder then it's absolute gibberish but the other codes seem to work. I tried to point out that it wasn't working for me but all I got was "Heh, you should have browsed /t/ or just know", but the other ones decode properly so idk what the fuck was with that.
>>21587 that code leads to a broken version of the model anyways so decoding it doesn't really matter.
>>21587 If only you tried harder with that gibberish...
(1.87 MB 1536x1024 catbox_8gbhck.png)

Kek, I may clean this one up and inpaint it.
>>21589 I'm not going to go full forensics on someone wanting testers. Sorry but absolutely not sorry.
>>21591 brainlet stopping at first failure
Yeah seems like it was the SDXL option breaking supermerger, fucking meme model merging has ruined the easiest way people can fix broken clip tensors. Just have to hope the extension gets an update with the option to reset or skip the CLIP ID like the MBW extension lets you (this fixes any chances of broken clip tensors with any future merges you do)
>>21593 try reaching out to the dev maybe?
>>21594 someone already pointed out the reset/skip CLIP ID option to the dev, it's the only way I know that any future merges made with the extension won't have any broken clip tensors
>>21593 I think model toolkit can fix broken clip keys
>>21596 yeah tried it out, might have to use it for all the merges with the latest commit for supermerger because even the starting add difference model will show up with broken clips on this version.
Talking about models and shit, has anyone tried these models? https://huggingface.co/JosefJilek/loliDiffusion/tree/main https://huggingface.co/Undertrainingspy0014/RandomStuff/blob/main/loli_A.safetensors I'm highly interested in anything that makes genning loli easier or nicer and I heard these two are pretty good.
>>21598 Forgot to add but apparently the lolidifussion guy is uploading the updated versions of his model to pixai for whatever reason https://pixai.art/@josefjilek/artworks/models
>>21599 i hate the botnet
>>21599 are his models a finetune or a merge?
>>21601 Pretty sure it was a dreambooth
>>21599 https://www.aipictors.com/works/22084/ The Loli_A model is apparently from some japanese dude or whatever, makes some cute lolis.
(1.59 MB 960x1344 catbox_u7lq4b.png)

(1.61 MB 960x1344 catbox_jte1dm.png)

Has anyone tried to make money off selling prompts or their AI knowledge? I'm poor as fuck but I've picked up a fair bit of know-how and it'd be nice if I could use it to fund myself an entry level GPU or even a full on PC
(2.33 MB 1280x1600 catbox_8kijov.png)

(2.17 MB 1280x1600 catbox_c3xj4o.png)

>>21605 there were some guys making money off pixiv fanbox early on but i wouldn't count on succeeding the same way now
(2.03 MB 1280x1600 catbox_xm5k08.png)

>>21608 Stop it. Please. :'(
Looks like the guy that leaked the TooningMagic models yesterday struck another company.
https://s3.us-west-1.amazonaws.com/dropbox10.23/aihub/secure_xl_danbooru2022_vae_bs24_3e5_13200-step00054400.ckpt
https://s3.us-west-1.amazonaws.com/dropbox10.23/aihub/secure_xl_danbooru2022_vae_bs24_6e6-step00020000.ckpt
An alleged danbooru-trained XL model from "AiHub", some Korean company. Don't know shit about it. It looks fried from what I have seen from posts on /g/. To me it only confirms that XL was a rushed hack job in an attempt to get suckers other than Ashton Kutcher to give them funding.
>>21611 ewwwww
Things are getting spicy
>>21610
>ckpt
Why?
>>21615 You have to eat all the pickles
>anons found a paywalled model
>anon is extracting the model from the exposed SD session
Kek
>>21616 I'd rather not, I ate an entire can of pistachios yesterday and I already have the shits (worth it)
>>21617 Share the goods anon
He's searching them using
services.http.response.html_title:"Easy Diffusion"
services.http.response.html_title:"InvokeAI - A Stable Diffusion Toolkit"
services.http.response.html_title:"ComfyUI"
On Censys but apparently Shodan works too
>>21615 Korean tech startups are not smart it seems
The guy also just dumped a bunch of exposed SD webui sessions (A1111 gradios, InvokeAI, etc) and has been ripping and dumping models for the past 20 minutes
Even got a model /sdg/ was seething over last month about being worth 50 bucks
>>21620 He's offering free access to the H100's too
Shame I don't have any models or loras I'd want to try
>>21620
>about being worth 50 bucks
.... eh? sure, it's paid but didn't some sperg donate 300 bucks to the monster girl model?
>>21621 Yea but through the UI, and I'm pretty sure you can't install Kohya as an extension and train directly from the UI.
>>21622 Yea, he scored 300 on a one time purchase from a guy on pixiv. This guy has a gumroad page selling at $50 a pop and has 185 sales.
>>21610 >>21613 kindly reply if anything decent comes out of this, i'm off to bed
>>21624 So far only one paywalled model that can do pixel art was leaked
https://s3.us-west-1.amazonaws.com/dropbox10.23/retro/RetroDiffusionModel.safetensors
At least this is something that /sdg/ wanted, people are mostly just jumping between exposed sessions and playing around with what's in it.
One of the H100 guys is actually using a Lambdaspaces GPU and seems to have been working on finetunes of all sorts. I asked the guy to see if he can dump everything he has including training parameters.
(1.61 MB 1024x1536 00029-2460767566.png)

(1.79 MB 1024x1536 00096-1020117811.png)

(1.84 MB 1024x1536 00004-3838098296.png)

(1.70 MB 1024x1536 00046-4119832748.png)

(1.76 MB 1024x1536 00064-1071292635.png)

(1.81 MB 1024x1536 00031-399314792.png)

(1.75 MB 1024x1536 00130-758014280.png)

(1.83 MB 1024x1536 00026-4294517336.png)

(8.06 MB 2720x4096 catbox_mf1mdf.png)

>>21512 if you're open to training new artists, I figure I could ask for a SPAGA lora. they have a very unique style. a lot of their work is locked behind fantia but there's still a lot of their works on boorus. pic unrelated
https://mega.nz/file/WqBy2Twa#LAyX6D09qK3lqS4AWKy8ZicNDkpqKrqv4LUgLC-CBbY I reuploaded one of the pixai lolidifussion models to mega. Seems like this guy made a mix using Based66 so now it's going to go full circle. I'll try reuploading the Anything and the AOM ones too in a bit.
>>21629 Nta but Spaga is on my to-do list so I could start working on it tomorrow
(3.37 MB 2048x3072 catbox_eqruxm.png)

(5.07 MB 2048x3072 catbox_9yrslq.png)

(3.39 MB 2048x3072 catbox_xdy3ok.png)

>>21631 that would be fantastic. call me a schizo but i had a dream where i was genning with a spaga lora. i'd make it myself but my specs don't really match up with lora making.
>>21607 Good ol' days. Funded half of my 3090 that way. Now I can train at 768+ while watching streams/gathering more datasets/whatever when I would have gotten a frozen browser on 8GB VRAM.
Well now that the dust has settled, the leak fiesta that happened didn't really produce anything in terms of new models but anons managed to start logging into unsecured webui sessions. Someone is trying to see if they can train a LoRA but no proof of them getting their stuff together to train has been posted, so it should be a funny situation if it does happen.
>>21625
>So far only one paywalled model that can do pixel art was leaked
The $50 one?
>>21636 >>21625 Oh yeah it's that one. Did the separate 32x version also leak?
>>21635 Realistically the only leak that would matter is something like a second NovelAI leak or *journey model leak, but none of those are gonna happen by the means these other leaks did.
>>21638
>a second NovelAI leak
Eh, once they train that new model with jewidia gibs then sure. Right now it's just a new UI and a furry model that's probably lightyears behind the community ones.
>>21585 hnnng
>>21639 We can't expect God to do all the work
(2.45 MB 1280x1600 catbox_jc91go.png)

(2.83 MB 1280x1600 catbox_txgf6j.png)

Oops left this one in the oven too long >>21609 Sorry I had to, the Henreader lora reminded me
>>21604 AI is so good at hallucinating horny outfits
>>21644 >that cameltoe kek
(530.96 KB 992x1403 95318487_p7.jpg)

Is this kind of angle possible or is it too extreme?
>>21646 Not extreme to shove my face into
Real talk though, I'm not sure how training would react to that
>>21646 extreme poses are promptable but you're going to have to make an openpose skeleton for controlnet to infer them
(1.86 MB 1152x1536 catbox_a6xold.png)

>>21646 what >>21648 said
>>21647 posted it therefore dibs
Also just for proompting, not for training
>>21649 This is just controlnet lineart right? Would the skeleton really work for this? >>21648
>>21650
>posted it therefore dibs
already saved it, there is nothing you can do about it
>>21651 malding rn
(2.39 MB 1024x1536 00013-2393670754.png)

>>21646 Sniff.
>>21653 cute youmu!
>>21653 smells like freshly cut grass
>>21656
>Nice! Is this Youmu-kun lora?
Thanks. Yeah, and I'm still stubborn about uploading it, along with all the others I have lying around. I shared a few very obviously overtrained LoRAs back in January and I've been embarrassed to contribute anything else since. I'm not very happy with the current ones I trained (now months old) since I don't feel I fully grasp hyperparams yet either. Seems wrong to contribute back something that probably sucks.
(2.38 MB 1024x1536 00000-305399800.png)

>>21657 Thank you. Cute choco as usual! >>21658 That's alright. I've seen swimsuit Youmu on aibooru with that lora and thought it looks pretty nice, but I understand if you're hesitant to post it.
(869.81 KB 1024x1536 00145-1129740389.png)

(1.49 MB 1024x1536 00056-3246740949.png)

(1.63 MB 1024x1536 00172-4092554762.png)

(1.95 MB 1024x1536 00275-3079029457.png)

(758.37 KB 1024x1536 00304-146740117.png)

(1.43 MB 1024x1536 00049-2650524658.png)

(1.55 MB 1024x1536 00102-2203034671.png)

(1.78 MB 1024x1536 00072-1870461309.png)

(1.92 MB 1024x1536 00043-3330123068.png)

>>21662 me too anon. the stylized flat color artists with blocky hands like wamudraws or inkerton always do hands well. if you know any more good artists like that, definitely let me know
>>21663 Not him and not for porn but I love this guy's stuff, the style is adorable. https://www.youtube.com/@Cermrnl/videos
Has anyone had luck generating poses like this? AI always shits the bed for me whenever generating any slightly complicated poses
>>21665 If what I posted above already needs controlnet + ideally a rig I don't think you can do that with t2i alone in this decade
>>21665 >>21666 ......... wait a moment what's stopping us from flipping them up, training, proompting then flipping them again?
(1.54 MB 1440x960 catbox_9lskzv.png)

eyebrows AND lips peropero
(565.71 KB 640x800 00253-548869077.png)

(517.41 KB 640x800 00254-548869077.png)

(561.53 KB 640x800 00255-548869077.png)

(2.24 MB 1280x1600 00263-1184634785.png)

>>21669 it seems to work, but i kind of hate how anime models will unnecessarily add detail i didn't ask for. i'll have to play around with forcing a more flat style, i think. these are from based64, maybe i'll try it on other models
my VPN randomly got a 3 day ban on 4cuck for replying to a non-drama post on /sdg/ about the unsecured webui connections because it was "off-topic". I hate /g/
>>21671 >banned for raising legitimate security concerns on the unofficial troonix board sorry to hear but that's hilarious
>>21671 Almost all VPNs got rangebanned during the last 1-2 years. A few years ago you could get by even with absolute normalfag shit like Nord, but now even my VPS is rangebanned lol.
>non-drama post on /sdg/ about the unsecure webui connection because it was "off-topic"
Any other posts deleted there? This sounds pretty insane tbh.
>>21665 Most models will fail most of the time for upside-down faces. I'd say that you need to train an upside-down face LoRA to make the pose work consistently.
>>21674 I really doubt that a lora would be enough for this
So after the NaN bug appears, the vae is converted to 32bit, does this stick for every gen after that happens? Just had a NAI vae NaN bug when genning the initial lowres pic, it got converted to 32bit, and after that I OOM'd when img2img upscaling with 840k while usually it wouldn't OOM for this resolution.
>>21665 One tip I have for this: when inpainting the face, make sure to flip the image
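i.e. something like this (PIL, just a sketch): flip the gen so the face is upright, inpaint it normally, then flip the result back.

from PIL import Image

img = Image.open("gen.png").transpose(Image.Transpose.FLIP_TOP_BOTTOM)  # face is now upright
img.save("gen_flipped.png")   # inpaint this one in the webui as usual
# afterwards: Image.open("gen_flipped_inpainted.png").transpose(Image.Transpose.FLIP_TOP_BOTTOM).save("final.png")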
>>21653 catbox?
>>21678 https://files.catbox.moe/2l47zv.png upscaled with remacri/restart
>>21679 thanks, really like the style. that is a LOT of loras
>>21680
>thanks, really like the style.
You're welcome
>that is a LOT of loras
Not even my final form, I was stacking 11 loras recently. Kinda fun to find what styles work well together.
>>21679 Zankuro really is the mayo of style LoRAs.
(5.42 MB 1536x2304 00054-2274830234ed.png)

>>21682 can't imagine using based64 without at least 0.2 zankuro, it's just straight up improvement for linework and curves.
(549.43 KB 1024x1536 catbox_aw6yzm.jpg)

>>21679
>upscaled with remacri/restart
what does the restart sampler do? reading a bit about it, it's apparently a best of both worlds between SDE and "ODE" samplers?
>>21684 I liked how it looks for upscales both for low and higher denoise, feels like it adds to the picture without fucking up things, and doesn't need many steps for good-looking upscale. For initial gen 2M or SDE Karras (not 2M SDE) are still my preference.
>"ODE"
https://stable-diffusion-art.com/samplers/#Old-School_ODE_solvers
ODE refers to oldfag converging samplers I think
(599.73 KB 1024x1536 catbox_pg8c7w.jpg)

>>21687 cool, will try out restart some more. I was also quite liking DPM++ 2M SDE Exponential for the hires upscale. that was a really interesting read too
>>21686 it's a bit of luck, that lora mix seems to work pretty nicely with the prompt tho https://files.catbox.moe/lj5pov.jpg
I fucked up the dynamic prompt in those, was using an old UMI one that supported % markers.
>>21688 Thanks buddy.
(1.75 MB 1024x1536 00046-3573027891.png)

(1.73 MB 1024x1536 00107-2264715656.png)

(1.82 MB 1024x1536 00001-4224335041.png)

(1.78 MB 1024x1536 00038-3981765275.png)

(1.54 MB 1024x1536 00108-3646146393.png)

(1.60 MB 1024x1536 00019-3395447332.png)

(1.76 MB 1024x1536 00042-3259160667.png)

(1.75 MB 1024x1536 00155-1224031756.png)

>>21684 It's some witchcraft that just works. I can slap that shit into upscale, denoise it at like .6 and it'll clean the image up great without super fuck ups. Marginally related I was down bad with anon's cleft of venus lora he re-made.
>>21688 What a cute smile.
(2.40 MB 3194x3780 110181179_p0.jpg)

Does this style have a name? It's from https://www.pixiv.net/en/users/16947703/illustrations but there probably isn't enough material for a style LoRA and I swear I've seen this kind of style before.
(1.57 MB 1024x1536 00054-1262677621.png)

(1.70 MB 1024x1536 00003-4012117225.png)

(1.84 MB 1024x1536 00075-4199113962.png)

(1.75 MB 1024x1536 00778-599405849.png)

https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg added cheshirrr
>>21695 lol painterly with messy brush strokes i guess
(1.25 MB 1024x1536 02310-3161644646.png)

(1.73 MB 1024x1536 00078-2900360063.png)

(1.82 MB 1024x1536 00415-3082486014.png)

(1.86 MB 1024x1536 00266-2104705030.png)

>>21696
>lol painterly with messy brush strokes i guess
Yeah it's just messy brush strokes but it's generic enough that I feel like it should have a name
>>21456 make one for her sexiest summer 1st ascension, don't think I've seen it done yet...
(3.24 MB 1120x1536 catbox_2ppsxv.png)

(2.89 MB 1120x1536 catbox_ickc94.png)

in the loractl documentation it says you're supposed to use a semicolon to separate the hires fix arguments if you're using them. this is wrong, it's supposed to be a colon. if you use a semicolon, the lora gets applied at full strength across all of both generations regardless of settings.
>>21701 holy SHIT the skin detail on the first one
>>21702 euler a with restarts is really good
>Hires sampler: Restart
i'm assuming i finally need to git pull off of a pre-gradio-update commit to use this, right?
>>21705 wh what the fuck how do you do it
(2.14 MB 1280x1600 00226-1429744380.png)

>>21706 Not gonna pretend it's witchcraft or anything, I just have a general setup and workflow that works pretty well. I also tend to like these kinds of styles
>>21701 >>21708 Actually, no, you're just misunderstanding the documentation, and I misunderstood your post too. The only time it mentions semicolons is for when you're specifying multiple "weight@step" blocks. That accepts either a comma or semicolon.
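Going off that description (this is just my reading of the loractl README, not tested, so treat it as a guess), a multi-block weight schedule would look something like
<lora:examplestyle:0.2@0,0.8@0.5,1@0.8>
i.e. the lora ramps from 0.2 at the start to 0.8 at 50% of the steps and 1.0 at 80%, with commas or semicolons separating the weight@step pairs.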
>>21710 Oh, then that's a typo. I'll make a PR.
>>21711 thanks
other version of Based67, adjusted some stuff so non-Lora gens have a more distinct style. Might be the one that i officially release but would like some testing done https://gofile.io/d/o6AFPr
>>21713 cool, downloading. you mind sharing the final recipe breakdown when you officialize the model?
>>21699 the generic term would be rkgk
>>21714 sure, just don't share with the zhangs
(2.74 MB 2560x1024 xyz_grid-0001-183536418.png)

(2.11 MB 2560x1024 xyz_grid-0000-183536418.png)

(608.70 KB 640x800 00007-183536418.png)

(666.65 KB 640x800 00006-183536418.png)

>>21713 All the Based67 versions have been pretty solid. Not sure why but running P7 through model toolkit shows broken unet and vae and outputs a broken model. Though I mostly just use it to prune models and bake-in a vae and it's already pruned so not a huge deal.
>>21717 yeah the latest version of supermerger has a weird habit of having those broken portions with merges now, I had to roll back to a previous version of the extension to fix that
(2.88 MB 2560x1024 xyz_grid-0028-4118321928.png)

(2.75 MB 2560x1024 xyz_grid-0029-4118321928.png)

(2.92 MB 2560x1024 xyz_grid-0026-4118321928.png)

(2.84 MB 2560x1024 xyz_grid-0025-4118321928.png)

>>21717 IMO Pβ would make a good release candidate. It's different enough from B64, has some style and good lora compatibility. OA-P9 is a little too stylized for my use, the style is good but it treads a little close to the less lora-compatible category
>>21718 I guess that might explain why there were two β versions
>>21719 OK I'll keep that in mind, I'll try to adjust OA-P9 a bit more before I make my final choice.
Just finished pushing through the last of the KnK film frame extractions and initial clean up (deleting blurred frames, undesired effect shots, frame transition/super impose/opacity changing shots, etc) that I tasked myself to get done this month and am now running the deduper to have my dataset sample sorted. I also have a chunk of frames I set aside that need to be cropped, stitched, and layered together that will probably take up the rest of the week, then manually check composition tags on all of them before I can start training again. Not sure how long this training will take this time around. My normal time is around 38 hours. Training at 768 or even 1024 resolution is possible on a 4090 from what Lodestone told me, it will require me to lower the batch size enough to not run out of memory and will lengthen the time per epoch, and I also need to train more epochs. Not gonna do a Vpred or zSNR yet, wanna make sure a standard bake looks fine before playing with the extra settings. Unless the training ends up being ridiculously long (5 days or longer), I'm expecting end of the month or first week of October to have a new revision, otherwise I'm gonna be using save states to space out the training and this might drag into the middle of October or later. If this training ends up being better and worth the extended time, I may consider dropping some money for a Lambda Labs H100 session or something to speed things up in the future.
>>21705 honestly looks better than i expected it to given the upscaling process tends to fuck with style loras like that
>>21693 nice, #3 very juicy
>>21721 is your TPU sitting idle?
(2.16 MB 1600x1280 catbox_o2gb8a.png)

(2.19 MB 1600x1280 catbox_njhyrn.png)

>>21724 Beware the river gremlin
pastelmix lora is ok for painterly style but it does fuck up anatomy sometimes
>>21727 is that referring to a lora extracted from the pastelmix model?
>>21728 yeah, I think it was posted here a few threads ago, used to live here https://huggingface.co/andite/pastel-mix/blob/main/pastelmix-lora.safetensors but it 404's now
>>21730 The strongest prompt
>>21723
>TPU
No, I didn't activate my Google Gibs because 1.5 sharding has a bug on the Jax developers' end where there is an unexpected error with the recommended sharding method. He didn't explain how it was different from the SDXL sharding, but said that there were all sorts of other issues impacting XL regardless. He said if the bug ever gets resolved, then activate it, because otherwise a single TPUv3 is the equivalent of a 16GB VRAM card, plus PyTorch doesn't play nice with Google's stuff.
(1.01 MB 1024x1024 catbox_8bdw73.png)

(1.23 MB 1024x1024 catbox_0by592.png)

Got a bit bored with B64 so here's a couple of Remis in armor with AbyssHellHero
>>21734
>AbyssHellHero
The fuck is that? Looks good though.
>>21735 AOM2 mixed with Helltaker and MHA style Loras https://huggingface.co/AIARTCHAN/AbyssHellHero
(299.90 KB 960x1152 catbox_o2v3i8.jpg)

(649.25 KB 1704x2048 11834.jpg)

(4.00 MB 530x640 progress.gif)

this took me 2 hours. muh vidya time eaten by a damn hand.
>>21737 damn how did you fix that hand?
>>21738 flexpaste
>>21736 i'm honestly so sick of people baking loras into models
>>21737 bro why did you not just start out by roughly painting the hand in an image editor before img2img'ing it
(1.95 MB 1280x1600 00467-4103124289.png)

I know I shouldn't be shocked but a friend's friend was really struggling to generate shit so I asked what he's struggling with. He said that half the time he needs to up tag weight to 2 and eye color requires at least 1.3. After some back and forth it turns out he's using some unholy anything-basilmix chinkmix.
>>21743 Send the poor fuck Based64 lol
>>21744 I did, he insisted he's good and that he's switching to "kurogishiki" instead because "it's really flexible" https://huggingface.co/Aotsuyu/Gishiki/blob/main/KuroGishikiv1.1.safetensors I haven't tested it yet but I'm not expecting anything, currently freeing up space to test it
>>21745
>hey you want to try out this new model
>no thanks i'm using this model instead
the fuck
(1.13 MB 1024x1024 00098-1187345560.png)

first message out of lurk, super new, looking for a nsfw canny model apparently related to abyssorangemixv2? controlnet is still magic to me so idk if it's even necessary to have one trained for hentai or porn or whatever
(1.86 MB 960x1440 catbox_vtnsey.png)

(2.15 MB 960x1440 catbox_9veok8.png)

(1.58 MB 960x1440 catbox_7xb6tf.png)

(1.98 MB 960x1440 catbox_ax19il.png)

chinese takamichi lora off civit is actually pretty good...
>>21745 I hate this guy already
>>21747 Short answer: just download what you want (ctrl+f canny) from here https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/tree/main
Long answer: controlnet models are separate from checkpoints. There's no association with AOM2 nor sfw/nsfw versions. Normally I would recommend downloading from the source but they're unpruned (twice the size) and not .safetensors https://huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
>>21750 nta but do you lose anything by pruning and/or fp16-ing controlnet models?
>>21751 I've always used the pruned versions so I wouldn't really know but they should be basically the same.
(2.11 MB 4000x1081 xyz_grid-0010-3378396779.jpg)

(2.09 MB 4000x1081 xyz_grid-0009-3378396779.jpg)

(2.04 MB 4000x1081 xyz_grid-0017-3727094870.jpg)

(2.14 MB 4000x1081 xyz_grid-0016-3727094870.jpg)

>>21745 Dumb for not accepting advice but at least it's not a terrible model
(2.31 MB 1280x1600 00169-2708194437.png)

>The worst she can say is no
>>21756 on /h/ or somewhere else?
(2.08 MB 1280x1600 00310-3294792521.png)

>>21756 >>21758 I'm glad I'm not the only one getting kicked out of Subway when they keep getting my order wrong
>>21750 Thank you
(8.05 KB 956x77 cmd_VoZ63TXjPz.png)

never got this error, anyone know what to do?
>>21756 Sire do the needful and use *insert latest FotM Model/UI here* Thanku
>Found the list of banned words for colab >Wanted to post them on 4chan >Error: Your post contains a banned word >Banned for 3 days t-thanks Anyways, here's the list. var b = 'stable-diffusion-webui;stable-diffusion-$blasphemy;/sd-webui/;/sd-webui-;//stablediffusion.vn/wp-content/uploads/Colab/;c3RhYmxlLWRpZmZ1c2lvbi13ZWJ1aQ;d2VidWk;V2ViVUk;c2Qtd2VidWk;SET UP STABLE DIFFUSION WEBUI;/stable-diffusion-$;sd-webui-tunnels;.lower()}-webui.;/EasyGUI;/EasierGUI;//github.com/trivial16/testai/;aHR0cHM6Ly9odWdnaW5nZmFjZS5jby9nbWsxMjMvY29sYWIvcmF3L21haW4vY29sYWIucHk;"w"+"e"+"b"+"u"+"i";huggingface.co/nolanaatama/colab/resolve/main/microsoftexcel;cagliostro-colab-ui;//github.com/Stability-AI/StableSwarmUI;VoiceConversionWebUI;VoiceConversion{reeee}UI;Voice-Conversion-WebUI;github.com/777gt/EVC;Stable-Diffusion-Webui;nolanaatama/microsoftexcel;microsoftexcel-tunnels;a1111-microsoftexcel;{repo_type_lower}-webui;SD-WebUI-TW;blasphemy=base64.b64decode;guebui = base64.b64decode;guebui2 = base64.b64decode;sdw = binascii.unhexlify;AUTOMATIC1111/{voldemort};{sd}-{wui};vorstcy, vcavry = read;zyx-webui-two-shot-main;ddPn08/automatic1111-colab;dagshub.com/camenduru/ui;github.com/camenduru/tunnels;/content/sdd-webui;github.com/DominikDoom/a1111-sd-webui-tagcomplete;blasphemy/webui;stable-diffusion-w\u0435bui;sd_web_ui;Automatic1111 \u043e\u0431\u043d\u043e\u0432\u043b\u0451\u043d \u0434\u043e \u043f\u043e\u0441\u043b\u0435\u0434\u043d\u0435\u0439 \u0432\u0435\u0440\u0441\u0438\u0438 WebUI;-diffusion-webui;/content/{sdwu}/;a1111-sd-webui;github.com/comfyanonymous/ComfyUI'.split(";");
>>21763 Wonder what the banned word was. Anyway, probably shouldn't post on 4cuck anyway, this thing should be as low profile as possible now imo
>>21763
>Error: Your post contains a banned word
I have never fucking seen that 4chan error before. Anyway fuck 4cuck, just keep the goods here.
(1.56 MB 1024x1536 00083-3606677437.png)

(1.76 MB 1024x1536 00084-672062170.png)

>>21713 Looks decent, but characters have really thick outlines with and without style loras. Is this intended? Personally I prefer thinner outlines.
>>21765 most of the time it's about dox (which is mostly enacted when a janny is found to be a well known spammer)
lmao google approved more tpu gibs to furries, even gave them a v3-32 pod, furries might be the first ones with a usable sdxl (if he decides it's worth continuing that and not just training other shit since sdxl's unet is unnecessarily big)
here's the report lodestone sent: https://docs.google.com/document/d/12CTTgfmAG0CqY3lPncKTSs6009R44_HJK7Vu53fmyB4/edit
>>21754 I love this slut
>>21738 >>21741 I did... twice. the closed fist in the beginning was just cursed even with a lot of inpaint + gimp cycles. so then I sketched open hand in gimp and refined that with inpaint. sometimes the rough paint approach works really well and other times I struggle a lot.
Anyone know what sites like patreon/fanbox etc haven't banned AI? And any sites that won't ban for loli?
>>21768 The thought of a furry finetune being the first SDXL finetune seems so fucking hilarious to me. It’s like a slap in the face to Emad’s ESG pandering bullshit kek
>>21772 Patreon Im not aware of any AI content bans, just Loli.
>>21768 Just read that it's a preemptible pod, meaning that if Google needs that computing power while he's in the middle of training, they can stop his training temporarily. Even with save states, he could lose hours of training on an interrupt like that. But still, this mother fucker asked for that pod back in June lmao
So 4cuck mods no longer 3day you for flatties, right?
>>21776 probably not, there was a mod in that thread at all times when AI got big. now there's probably one that only checks if they get a report or something
>>21776 the only deletion i've seen recently was someone posting furry, as long as your flatties are borderline you'll be safe
Anyone have a collection of concept LORAs?
>>21774 What site did people here use for monetizing AI gens and loli stuff? And I thought Patreon had a lot of non-AI loli artists and stuff?
>>21780 Pixiv requests
>>21781 Doesn't pixiv ban AI monetization now?
>>21782 I believe Fanbox does yea >But only if you get caught
>>21782 >>21783 Fanbox banned AI nearly from the beginning iirc but the pixiv request feature allowed people to make money for a while
What about Gumroad? has anyone tried that?
>>21785 I know that RetroDiffusion guy was selling his model there. Actual art not sure.
>>21786 >>21785 Nevermind, apparently gumroad doesn't allow Loli. Funny how they either ban loli or AI but never both. Ci-en might be an option? I don't think they ban AI?
>>21754 Catbox pls
>>21756 Doing god's work. Also, catbox?
(1.67 MB 960x1320 catbox_9bfcdd.png)

cannot stop prompting goburin while i work on datasets for other things
(2.59 MB 1024x1536 catbox_l0rzdo.png)

banned for being racist to indians again i will never stop >>21789
>>21792 thanks mark
>>21792 i'm proud of you but you need to broaden your horizons (hate more minorities on top of pajeets)
>>21777 I'd imagine some seething faggot would still report it but yeah judging by archives it should be okay >>21778 yeah I don't usually gen them full loli, but flat chest used to be enough for jannies to start angrily dilating.
What was the problem with based67 p7 again? I'm doing some testing and it seems like it was better than the final version that anon released on civit
>>21796 I didn't know he decided on a final mix already
>>21797 https://civitai.com/models/149664/based67
Well looks like he did. It's alright by itself but has a much more prominent style with thick lines, I feel like b67 p7 was considerably more neutral
>>21796 i do hope basedmix anon provides a more neutral variant like p7 for based68
>>21800 Catbox of both please
(1.71 MB 960x1440 catbox_sh9ppv.png)

(1.83 MB 960x1440 catbox_bsjv08.png)

(1.74 MB 960x1440 catbox_pbc9fg.png)

(1.31 MB 960x1320 catbox_xluy73.png)

feral jungle lolis (WIP plant clothing lora posted on /h/ but I'm doing retrains right now)
>>21798 I can release that version onto the huggingface page, knowing that gofile nukes stuff after awhile
(91.55 KB 570x671 Sapphire_first_outfit.png)

>>21802 Now do manga Sapphire.
(1.66 MB 960x1440 catbox_ugg9np.png)

(1.59 MB 960x1440 catbox_00ak3r.png)

(1.50 MB 960x1440 catbox_7i79br.png)

(1.86 MB 960x1320 catbox_3bwipg.png)

leaf clothing version final, bikini/skirt/bra/panties https://mega.nz/folder/OoYWzR6L#psN69wnC2ljJ9OQS2FDHoQ/folder/ewRA1BZY >>21805 she actually makes up a decent subset of the training data! while you can't prompt her by name you could probably get good gens by prompting haruka/may with or without a lora
>>21804 Was there a reason you changed from P7 so much? The final release feels really different
(1.44 MB 960x1440 catbox_zat4x3.png)

(1.45 MB 960x1440 catbox_zggoj2.png)

(1.64 MB 1440x960 catbox_9jelc0.png)

(1.81 MB 1440x960 catbox_i8xm7e.png)

i'm having too much fun with this
>>21807 Just wanted 67 to have a particular look with general outputs, just hoping it didn't affect LORA outputs too much
>characters tend to look a little bit younger Civitai dude got what i was going for, the loli models that were shared here are in the merge recipe
>>21810 What civitai dude? Link?
>>21811 nevermind, I didn't know you were talking about based67 Are the loli models good then?
>>21812 Loli A was pretty good Loli diffusion was good too but the AnythingV5 merge was the best imo.
>>21813 Yeah I did find Loli_A doesn't suffer from the thing other models do where the character gets aged up to look more adutl if you write up a longer nsfw prompt but I didn't have time to test out all the lolidiffusion models Other models can do loli just fine but if you go too deep into nsfw or make a decently long prompt then the characters tend to young adult most of the time
(1.35 MB 1024x1536 00062-1853095860.png)

(1.14 MB 1024x1536 00029-771805092.png)

>>21809 It fits some styles really well but with others not so much imo. I will definitely use it when I want flat shading/thick outlines. Will look into P7 more because it feels like improved based64 so far
(1.13 MB 960x1320 catbox_iyf1va.png)

(1.32 MB 960x1440 catbox_7opegt.png)

(1.59 MB 960x1440 catbox_y6uqr9.png)

(1.44 MB 960x1440 catbox_tgc2z9.png)

So I decided to switch over to b67-v1 in the middle of a style test and it is basically just b66 but better for me in terms of flat colors and hard outlines. So I'll probably be switching back and forth between b64 and b67 depending on what style I'm going for.
(2.24 MB 1024x1536 catbox_ksf18j.png)

(2.31 MB 1024x1536 catbox_96ymwb.png)

(2.29 MB 1024x1536 catbox_9javxy.png)

(2.22 MB 1024x1536 catbox_vjvzse.png)

>>21816 I quite like what B67-v1 looks like with my ponsuke+zankuro combo but I'm gonna keep tossing different prompts and style mixes at it.
going through pov blowjob loras and it's funny how easily you can tell which ones were trained on mizumizuni
(1.86 MB 1024x1536 catbox_b7q3o7.png)

kino lora. also, interesting new extension: https://github.com/ljleb/sd-webui-freeu
>>21819 how does the unet= and te= stuff work?
wait, where was P7 uploaded?
(3.20 MB 2560x1119 xyz_grid-0039-1973102318.png)

>>21820 Pretty self-explanatory, it lets you control the weight of the unet and text encoder separately. It also makes me think I should go back to training unet-only loras. There's more stuff loractl can do, like weight over time/step https://github.com/cheald/sd-webui-loractl
(117.98 KB 640x960 catbox_2327vm.png)

(467.11 KB 768x1088 catbox_3rtksk.png)

(440.93 KB 640x960 catbox_fzkn5z.png)

(759.74 KB 768x1088 catbox_qa137u.png)

high denoise img2img is really stupidly powerful for lighting control
eh heh heh heh heh
>>21825 UOOOOOOOOOOOOOOOOHHHHHHHHH LOLIPUSSSYYYYYYYYYY
>>21824 How do you do that, is it just img2img mask or run through controlnet?
>>21827 Not him, but put that image into img2img and then prompt as you normally would in txt2img. High denoise is required.
>>21827 yeah 0.7-0.9 denoise in img2img while prompting normally, using simple images as input effectively allows you to control scene lighting
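if anyone wants to batch this trick, here's a rough sketch of the same thing through the webui API — assumes you launched with --api on the default port, and the prompt/filenames are just placeholders, not a recommendation:
[code]
# rough sketch: high-denoise img2img through the webui API with a flat
# gradient as the init image; assumes --api on the default local port,
# prompt/filenames here are placeholders
import base64, io, requests
from PIL import Image

w, h = 640, 960
col = Image.new("RGB", (1, h))
for y in range(h):
    t = y / (h - 1)  # dark at the top, warm glow at the bottom
    col.putpixel((0, y), (int(15 + 220 * t), int(10 + 140 * t), int(35 + 50 * t)))
grad = col.resize((w, h))  # stretch the 1px column into a full gradient

buf = io.BytesIO()
grad.save(buf, format="PNG")
payload = {
    "init_images": [base64.b64encode(buf.getvalue()).decode()],
    "prompt": "1girl, sitting by a window, warm evening light, detailed background",
    "negative_prompt": "lowres, bad anatomy",
    "denoising_strength": 0.8,  # the 0.7-0.9 range mentioned above
    "steps": 28,
    "cfg_scale": 7,
    "width": w,
    "height": h,
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload, timeout=600)
r.raise_for_status()
img = base64.b64decode(r.json()["images"][0])
Image.open(io.BytesIO(img)).save("lighting_test.png")
[/code]
swap the gradient for whatever blob of color you painted and the composition/lighting follows it.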
>>21825 kek gimp/photoshop settings?
>>21830 paint.net, effects -> color zoom blur
>>21831 >paint.net absolute madman
(1.22 MB 1024x1536 catbox_rp6j7p.png)

(1.86 MB 1408x1024 catbox_mh53dy.png)

(1.46 MB 1024x1536 catbox_xwei5t.png)

(1.31 MB 1024x1536 catbox_i5gqip.png)

I like based67 v1.0, it's at the very least a drop-in replacement of 7th anime, the lora compatibility is top tier.
>>21834 funny you say that because that lolified offshot of it "3A" was used in the recipe, someone shared that model long ago
>>21716 >>21835 you gonna post the mix recipe?
I like the backgrounds in Based67_v10 a lot. Not sure I'll switch from Based64v3 tho https://files.catbox.moe/fxckap.jpg
>>21836 yeah, I'll post V1's once I copy and paste the necessary metadata I saved
>>21838 P1: Add difference: Alpha 1 = AbyssOrangeMix2_hard/futanariFactor_alphaV10/Final-Pruned/calcmode": "trainDifference P2: Weight Sum: 0.16 = 7thPastelineBfp16-1787-2385-0006/b67-og-p1.fp16 P3: Weight Sum: 0.18 = loli_A/b67-og-p2.fp16 P4: Weight Sum 0.45 = b67-og-p3.fp16/Silicon29 P5: Weight Sum: 0.35 = loliDiffusionV0.8.7_AnyV5_0.8M7.8_VAE_PURIFIED/b67-og-p4.fp16 p6: Weight Sum: 0.24 = b67-og-p5.fp16/ cuteyukimixAdorable_v40 p7: Weight Sum: 0.22 = b67-og-67.fp16/3A p8:Weight Sum: 0.18 = feiwu_v10Slightflat/b67-og-p7.fp16 p9: Weight Sum: 0.22 : b67-og-p8.fp16/darkjunglemix_V2InkFix
>>21839 Was this all done manually or through an Auto?
>>21840 supermerger, I would say it was done manually with an x/y chart and the genning options
>>21839 thanks anon
>>21818 means those anons had godly taste
(1.70 MB 960x1440 catbox_joujym.png)

(1.45 MB 960x1440 catbox_n6it04.png)

(1.48 MB 928x1392 catbox_wxj0ao.png)

(1.53 MB 928x1392 catbox_9c4rgb.png)

>>21834 Which Alpaca Suri LoRA is that? Also, surprise standalone Kiss-Shot LoRA. I know there's like 4 of these already including my own but I was determined to make a definitive edition and I'm very happy with it. Mite give the same treatment to the other Kizu Shinobu versions.
>>21839 Any VAE you'd reccomend specifically for this? There's the NAI Vae, the WD vae, the blessed VAE and even the loli model guy made his own VAE too.
>>21846 >the loli model guy made his own VAE too is it some NAI vae mod like blessed and clear?
>>21847 https://huggingface.co/JosefJilek/loliDiffusion/tree/main IDK since i'm not a model dude but you can see a few VAEs on his hf repo Also, I noticed that B67 is using lolidifussion_V0.8, did you try the updated 0.10 models here? >>21632 He originally uploaded these updated models to Pixai but some anon reuploaded them to mega
>>21816 there's a suika lora?
>>21849 i made one and posted it last thread cuz there was a civitai one but it was your standard fare deep fried chinese shit where you couldn't change her outfit https://mega.nz/folder/OoYWzR6L#psN69wnC2ljJ9OQS2FDHoQ/folder/P9ZA1L7D
>>21522 Could someone point me in the right direction for what to do with this? I don't hang out on /t/ enough to know what to do with this. Even if I decode, it still looks like gibberish to me >>21551 This is what I feel like now
>>21851 nvm. I am an idiot and will repent for my sins
>>21850 yeah those deep fried chink models on civitai are completely useless based on the model and which 2hu you wanna gen
(1.99 MB 1024x1648 catbox_um7ool.png)

[pencil skirt|overalls] thank you for your attention
>>21844 It's the alpaca suri lora in gayshit, is there any other alpaca lora you are aware of?
Is supermerger now a broken piece of shit? I'm trying to use this goddamned thing, but in addition to bugs when trying to use it (loaded merges not unloading, forcing loading of the first checkpoint in my folder, ignoring the first checkpoint switch), my merges work for like 10 generations before returning NaN errors. Is there an alternative?
>>21856 The current version with the SDXL additions is fucked
>>21856 just roll back to version 12
>>21857 Which commit do I revert to
(1.60 MB 1440x960 catbox_jos2vs.png)

>>21855 no, just wanted to make sure
(7.80 MB 4480x1626 tmpi__bss5d.png)

>>21766 Were you using waifu diffusion vae kl-f8-anime2? It's known to cause hard black outlines with certain models
>>21858 version 12 does not work with 1.6.0
>>21763 Has anyone managed to get sd to work in colab yet? Not asking for a notebook, just to see if someone bypassed the filter.
>>21864 I'm also interested because I wanna set up my own colab.
The current supermerger is broken to hell, but v15 does not work on my webui; supposedly because 1.6.0 is too new. So there's no version of supermerger that actually works with 1.6.0?
>>21866 Guess not, if you really need to use supermerger you will need to rollback webui.
>>21867 i am so very tired
For anyone that might have the misfortune of running into the same issue: I have switched to commit 72a58f3e7ce900961ee4dbebbf138f5ceb16588b, it successfully generates merges in 1.6.0, and toolkit does not report them as having several broken sections or any junk data. I think I might have finally done it. I've been at this shit for hours because the merges worked fine until they suddenly didn't. I fucking hate python devs
>>21869 Based And fuck Python
Made a Kereno LoRA It's awful for porn, the dataset is almost entirely censored and that carries into the model. Tagging out with double weights pretty much doesn't work. What does work is the puffy pussy lora About 40% of the dataset is black and white, but it seems to do just fine (maybe unless you try black hair). This is probably the dirtiest lora I've actually posted The oppai loli are not included in the dataset, but some of the toddlers are I wonder if he releases clean, uncensored stuff anywhere
>>21862 Not him but I am using nai vae and based67's bias towards thick lines and flat shading is still very apparent even with style loras. I actually wanted a model like that for some styles but I'm probably gonna settle on p7 for general use
>(from above, male pov, fisheye:1.2) thank you b67 you are my greatest ally
>>21874 lmao
(1.17 MB 1024x1536 catbox_476qo8.png)

(1.14 MB 1024x1536 catbox_jpuzy5.png)

(1.29 MB 1024x1536 catbox_syq46p.png)

>>21871 How did you do it? I once tried to compose a dataset but every single work by kereno is covered by absurd amount of text
>>21877 :) I just did it any way I edited out text from images where it was easy (like 10 images) and just let jesus take the wheel, hence dirtiest lora I've ever posted
>>21864 >>21865 No, google is killing colab even for Pro users. You can buy google PRO+ though :)
(1.07 MB 1024x1536 00314-1421868295.png)

(1.43 MB 1024x1536 00341-64306986.png)

(1.58 MB 1024x1536 00049-1208603629.png)

(1.64 MB 1024x1536 00339-2101310268.png)

>>21879 Last time I checked pro+ was worse than the free tier
(1.49 MB 1024x1536 00006-1106311147.png)

(1.68 MB 1024x1536 00041-2489771426.png)

(1.62 MB 1024x1536 00043-1324422112.png)

(1.79 MB 1024x1536 00163-1122448952.png)

perfect little budding mammaries peroperoperoperoperoperoperoperoperoperoperoperopero chupa chupa slurpslsrlrurlsusrlurrpslrurp
>>21883 Cute
(1.64 MB 1280x1600 catbox_h2df0c.png)

(1.56 MB 1280x1600 catbox_uueso7.png)

>>21871 Very nice
(1.78 MB 1280x1600 catbox_jm8nil.png)

>>21885 Why do I always get the good ones after I upload
First ever LoRA, styled on 100-odd color drawings done by Tanabe Kyou, some NSFW and some SFW. Please send me any feedback, images, suggestions, etc; I'd like this to be decent at sex too but it's so hard to find color drawings of that by him. Trained in colab too, animefull (NAI) + nai vae, 6 epochs, 1000 steps. Idk what the hell I'm doing but it looks like his style I guess? sweet https://pixeldrain.com/u/5KMMRYV2
>>21888 Oops, here are a couple of early previews on 20 steps >>21833 Wanna share workflow or controlnet model used?
>watch AI update video
>guy says LOHA and LoCONS are better than LORA
kek
>>21890 loha is a fucking meme, same with lokr, but locons do learn more than loras and I find that they tend to be better for getting a lower filesize with the same level of detail as higher dim loras for artstyles. locon learns style too well though so they aren't as good for characters since they will style bleed all over the place
(1.01 MB 640x960 00227-0.png)

>>21889 I don't know if you want to share your training settings or dataset, but either way you've got some work to do before you get decent results. Step 1 is lowering your dim size
>>21892 dataset is gelbooru "-monochrome tanabe kyou", saving clean images and images w/ minor text which i edit out. and training is https://imgur.com/a/mrTteIt I'm seriously a noob, but want to get a few of my fav artists/characters done
>>21894 BELLY
(2.00 MB 1280x1600 catbox_za1cl6.png)

(1.99 MB 1280x1600 catbox_uwa33v.png)

(2.01 MB 1280x1600 catbox_qolnf7.png)

(2.01 MB 1280x1600 catbox_5y3ee1.png)

B46 vs B67 P7 vs B67 Pβ vs B67 V10. I think I underestimated P7, it may be my new go-to.
>>21896 I don't know what B46 is, meant B64
>>21893 Well you did good training on NAI instead of Anything v3, which that guide suggests. Definitely suggest using auto tagging with wd1.4 swinv2. You could probably use something like this https://huggingface.co/spaces/SmilingWolf/wd-v1-4-tags but batch tagging is very nice. For training settings, I think what you have already can work, but bring dim size down to 32 and alpha to 16. 1000 steps at batch size 6 is a lot, bring down your batch size so you have more usable epochs. You could also just try batch size 1 and keep your total steps between 1000-2000, I usually just have the default 1600 step limit.
(1.51 MB 960x1440 catbox_4ca0wg.png)

(1.64 MB 1440x960 catbox_1hl7rr.png)

(1.66 MB 1440x960 catbox_7jrnqz.png)

(1.28 MB 960x1320 catbox_atq2jc.png)

>>21894 I see the river maidens are becoming domesticated... (though I guess they always were since they're maids at some point)
(1.69 MB 1024x1536 00030-3322693326.png)

(1.62 MB 1024x1536 00138-3029848834.png)

(1.64 MB 1024x1536 00081-2750892298.png)

(1.69 MB 1024x1536 00103-2989128464.png)

https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg added henreader >>21606 i didn't see this until after i baked it, oh well. the more the merrier
(1.39 MB 1024x1536 00008-699153779.png)

(1.56 MB 1024x1536 00484-2989128467.png)

(1.75 MB 1024x1536 00005-3322693325.png)

(1.77 MB 1024x1536 00003-1233700346.png)

Anyone wanna take a crack at henrybird or airandou? 2 greats who I've seen nothing of. If not give me a few weeks I guess.
>>21900 Was just gonna ask, but yeah it's always nice to have choice. I always like the gens you include with your loras as well >>21902 Airandou would be difficult, not a lot of color source material that's not scans. I can definitely do henrybird though, maybe within the next hour or two
(742.49 KB 2339x3500 fcd9389cb165356a220ea8d0d4289e99.jpg)

(2.94 MB 1095x1656 E-CZy7ZXsAUTjz-.png)

>suddenly remembered reading/watching Keroro Gunso when I was younger
>just now finding out the mangaka went on to make Kemono Friends
how the fuck am I just now realizing this? I feel so stupid. Anyway, has anyone ever tried his style yet? Always found it super moe.
>>21903 >I always like the gens you include with your loras as well it's a nice way to prevent getting burnt out from training and also occasionally things wrong with the lora lol. fixing up datasets gets tiresome
>occasionally things wrong occasionally finding things wrong*
>>21904 There are two "Official Kemono Friends Style" LoRAs I've seen on Civitai which are effectively Yoshizaki Mine LoRAs since that's his style, but they're pretty hit or miss. A LoRA trained on a wider sampling of his art? Haven't seen one.
(2.44 MB 1280x1600 00365-1351067279.png)

>>21904 Love the style and watched a lot of Keroro when I was younger. I just never understood why I kept randomly seeing art of Alisa Southerncross on Pixiv.
>Hll anon dropped a 5.5 premerged mix test
>said if he trains HLL from scratch it will take 5 days
grim
https://boards.4channel.org/vt/thread/59147410#p59158611
https://civitai.com/models/150705
>>21909 I might look into it for merging, I already know anons here would want something more akin to b67-p7 due to it not being heavily stylized but I'm wondering how much this model with vpred and zsnr would affect the recipe result.
(1.76 MB 1280x1600 catbox_sk0xmo.png)

>>21890 Experimenting a lot this past month or two moving beyond Lora, I think lora still works fine. I've been liking LoCON more and more but I think loha / lokr are pure memes, and lycoris is just worse but works.
>>21911 Seems very clean for my single subject tests so far, and 6MB! Thx mate
(2.20 MB 1280x1600 00155-1062049327.png)

>>21909 I'm liking this premerged mix
I just realized that eye shapes aren't really tagged that well on booru. I fucking hate this.
I have done it I spent hours decensoring several pieces by Kereno, and added some uncensored sketches that weren't included previously The censors have been removed https://mega.nz/folder/dfBQDDBJ#3RLMrU3gZmO6uj167o-YZg/file/sCxUTbLK I guess next is to look into actually curating the set better and see if that makes those little mounds look better. I've also had a little success in cleaning a couple of the text-heavy pieces. Any way, I've been awake for about 36 hours and I think it's time to sleep
I want a klf8 blessed vae mix, but the vae merging script I found couldn't do it
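if all you want is a plain weighted average of the two, it's like ten lines of torch. rough sketch — assumes you have both vaes as .safetensors with matching keys, the filenames are placeholders, and no idea if a 50/50 klf8/blessed mix actually looks good:
[code]
# rough sketch: weighted average of two VAE checkpoints
# assumes both are .safetensors with matching keys; filenames are placeholders
import torch
from safetensors.torch import load_file, save_file

a = load_file("kl-f8-anime2.safetensors")
b = load_file("blessed2.safetensors")
alpha = 0.5  # 0 = pure A, 1 = pure B

merged = {}
for k, v in a.items():
    if k in b and b[k].shape == v.shape and torch.is_floating_point(v):
        merged[k] = ((1 - alpha) * v.float() + alpha * b[k].float()).to(v.dtype)
    else:
        merged[k] = v  # anything that doesn't line up just stays from A

save_file(merged, "klf8-blessed-0.5.safetensors")
[/code]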
>>21916 Nice Now I have to grab some pacifier lora, diaper lora and giving birth lora for the authentic kereno experience
>>21895 Very erotic. >>21899 They have been a lot of different things at this point, maids, monster hunters, office workers, hooter waitresses, construction workers, police officers and more. >>21900 Cute style, thanks.
>>21902 >>21903 Airandou is 100% possible because I trained a Dreambooth of their style last year. Just put monochrome in the negative, it's not that hard.
This is a yousaytwosign lora I made with a dataset of just his black and white art and manga pages. The colors are dulled but are applied just fine
I saw some posts on /h/ with animating shit on wallpaper engine and Live2D. I think you could probably make some money if you could AI Gen a base "model" and then rig it.
>>21923 Very nice
>>21923 catbox please?
I've been up all night cleaning up these KnK images. I got 100 more "scenes" to clean up before I can start autotagging, cycling the shit into Hydrus, and doing manual review of the image composition tags. I should be done with the cleaning by tonight or tomorrow depending on whether I pass out in the next couple hours. Rest of the week will be manual tagging. I may start training next Monday and expect it to go the whole week; might as well take a break from all this autismo work I'm doing while I'm at it, because I've been burning all my free time working on this finetune and I'm mentally exhausted.
>>21926 Do you have a script or anything for batch renaming the caption files when importing/exporting from hydrus? I've just been using powerrename from MS powertools but a one click solution would be nice and I don't feel like brushing up on python at the moment.
>>21927 The SmilingWolf WD tagger keeps the name but with a .txt extension, so I don't need to do any renaming, just import with a sidecar file. The BLIP captioner in Kohya also saves the original filename but as a .JSON, but I think that one is outdated, plus Kohya dumps the captions, so I haven't looked into an alternative for boomer captioning.
(6.44 KB 512x640 darkblue640.png)

>>21924 Thank you >>21925 https://files.catbox.moe/j8repu.png It's an img2img from this. I changed lora weights while upscaling/inpainting but not much
>>21928 (Me) And then when exporting, the filenames should be the same per default settings, as it just takes the file's hash and uses that as the name for both the image and your corresponding txt/json file
>>21929 These solid color/gradient img2img gens break my finetune… at least I know what to do to fix it, since my older versions work better with this type of gen than my current one. It makes me sad because they look like a very cool way to easily manipulate the lighting or add effects.
>>21928 huh, I used the webui tagger extension and since it creates .txt files I have to rename them since hydrus REQUIRES the captions to be [fullfilename].txt so image.jpg.txt or image.png.txt
>>21879 Last time I checked pro+ was worse than the free tier >>21932 >hydrus REQUIRES the captions to be [fullfilename].txt so image.jpg.txt or image.png.txt there's an option specifically to avoid that on the import menu
>>21933 fuck it kept the last message
>>21932 I have never had that issue before where it saves the extensions like that. Which webui tagger are you using? I was using this extension: https://github.com/picobyte/stable-diffusion-webui-wd14-tagger
And in the settings for file output format, it should be
>[name].[output_extension]
I have also been experimenting with this: https://rentry.org/ckmlai#ensemblefederated-wd-taggers
My only issue is that the unwanted tag filter doesn't work, or at least it doesn't explain how it should be correctly formatted.
>>21931 >at least I know what to do to fix it since my older versions work better with these type of gens than my current one Now I wonder what exactly could have broken img2img interactions?
>>21936 I'm thinking the added color influenced certain parts of my dataset in an abnormal way, but I've been staring at this shit for hours and days straight. I can kind of tell which stack of images is probably the influence, so I can go back and double check if I fucked up something. TL;DR, fuck anime screencaps.
(1.06 MB 3072x2678 catbox_gfrxu8.jpg)

(1.05 MB 3072x2678 catbox_fpmpf6.jpg)

>>21933 >>21935 oh fuck, my hydrus was 60 versions out of date so it didn't have sidecars and thus didn't have the ability to customize the filepath for captions, thanks for making me realize i'm retarded. also have some tests of a potential upcoming locon update (if i can scrape together a little more training data)
>>21938 Must, rape, island cat girls
>>21938 You can also use the sd-scripts wd tagger https://github.com/kohya-ss/sd-scripts/blob/main/finetune/tag_images_by_wd14_tagger.py I've modified this to output all txt files into a tags folder in the input folder, and then just do the sidecar import thing.
>>21935 >>21940 and I assume the undesired tags function of this script works
and here's the bash script I use to actually call the python script, which calls for the moat model (Which is probably recommended? It's the default on the webui tagger now) and includes a basic argument to turn on recursion https://pastebin.com/Mhzt8DS5
(1.11 MB 1024x1024 inpainted.png)

>>21927 ask chatgpt to write you a powershell/python script.
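or just something like this — rough python sketch of the rename, copies name.txt to name.png.txt / name.jpg.txt next to the matching image so hydrus can grab it as a sidecar (folder path is a placeholder):
[code]
# rough sketch: duplicate each "name.txt" caption as "name.<imgext>.txt"
# next to its image so hydrus sidecar import picks it up; path is a placeholder
import shutil
from pathlib import Path

folder = Path("D:/dataset")  # placeholder path
img_exts = (".png", ".jpg", ".jpeg", ".webp")

for caption in folder.glob("*.txt"):
    if caption.name.endswith(tuple(e + ".txt" for e in img_exts)):
        continue  # already in image.ext.txt form
    for ext in img_exts:
        image = caption.with_suffix(ext)
        if image.exists():
            shutil.copy(caption, image.with_name(image.name + ".txt"))
            break
[/code]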
>>21929 you mastered darkness, nice!
>>21945 he was born in it, molded by it.
(1.69 MB 960x1440 catbox_3ccd8g.png)

(1.77 MB 960x1440 catbox_cqa13k.png)

(1.90 MB 960x1440 catbox_rryxmz.png)

(1.69 MB 960x1440 catbox_m39zab.png)

added an alt version of plant clothing locon that can prompt coconut bras, not considering it a straight upgrade since it's a bit weaker overall https://mega.nz/folder/OoYWzR6L#psN69wnC2ljJ9OQS2FDHoQ/folder/XopDEAAA
>>21947 based
Has anyone ever made a gaping pussy LORA? I sometimes want to prompt characters with gaping wide pussies but it feels nearly impossible to prompt without a LORA or with any of the existing ones that I know of
(2.14 MB 1024x3072 catbox_8nzajk.png)

(2.39 MB 1024x3072 catbox_0nzqsy.png)

(1.49 MB 768x2304 catbox_nx8izq.png)

>>21949 there is actually a gape model mixed into aom2/3 hard, though if you want like speculum tier gape there's several LoRAs for it on civitai, just search for gape
>>21950 any good lora recomendations from civitai?
I like how the kereno lora works with ((3D)) It seems the tag counteracts many symptoms of what are likely the consequences of the substantial amount of manga art that was included in the dataset
(1.17 MB 768x1920 catbox_7b9qz7.png)

(1.10 MB 768x1920 catbox_i2p0sr.png)

>>21951 i have no idea if any of the civitai ones are good, i specifically go out of my way to negative gape lel
I don't browse /g/ much anymore, just skim it to see if there is anything being talked about (like leak anon) and bail. But someone posted this and I wanted confirmation on whether it was bullshit or not since we were just talking about SuperMerger.
Isn't what he's talking about mainly to do with the text encoder? From my recent, limited experience with supermerger (and the block merging that I was doing), only the base alpha refers to the text encoder in supermerger
>upscale and inpaint pic, like how result looks
>look closer and realize that anatomy is kinda fucked
Feels bad
>>21957 Nice, the mixed-reality thing hll-anon released had dark sushi in it so having a standalone makes things better for merging
>>21960 cute
>>21960 The LoRA I'm using can be used to cover the nipples as well.
>>21963 Correct.
>>21954 sounds like bs. doing hard flips (0 and 1) from model A to model B in different layers just seems like it would confuse the fuck out of the result. I've only dabbled with merging though.
>>21952 Shion lora?
>>21947 there's a Mamizou lora?
>>21967 I used this one for my previews but it's dogshit and I'm already working on a dataset to make my own: https://civitai.com/models/122452/futatsuiwa-mamizou-touhou-project
(3.35 MB 1280x1600 00020-3832929719.png)

>>21968 Ah yes tanuki girls
>>21969 Mamizou isn't just a tanuki girl, she's a tanuki HAG. Immediate impregnation is required.
here's a somewhat failed attempt of me using the age slider lora in combination with the kereno style to try to get into some forbidden territory
who's an artist that has really good anuses?
>>21972 cute, love the eyes
>>21973 minakami, pochincoff, forastero
has nothing to do with AI but sankakuck cumplex introduced anti-adblock popups a few days ago and now it will outright "flag" you and redirect you until you let the kike get his ad revenue https://github.com/Poofless321/chan-Sankaku-adblock
>>21976 haven't been using this shithole of a website for years, have I been missing out?
>>21977 not really. it always seemed to have the most pictures for any artist tag i checked but they've been deliberately making the site shittier and shittier to use for years to try to squeeze some money out of it somehow.
>>21977 >>21978 From my experience it seems to get new shit uploaded much faster than other boorus. VS other boorus (accounting for the absolute fucking trash search function changes): No loli/shota paywall (danbooru), can search with more than 2 tags if you know what you're doing (danbooru), 10x more leaked content from paysites (some of which isn't on kemono/yande.re) (both danbooru and gelbooru) The downsides: The "recent" tag rating system and # limitation, scraper-unfriendly, page limit, randomly paywalled posts (some of which are ironically paysite posts), worse than average tagging, influx of extremely shitty deviantart-level furry porn to protest the site changes, a few prominent commenters should ABSOLUTELY have their HDDs searched for pizza, even MORE limitations for anonymous/logged out users (you need an account to turn on dark mode...), owner cucked and renamed the rape tags and such because muh payment processors, etc (then he made a big cope post about how no he totally wasn't owned and muh freeze peach this and that) It was still tolerable before the tag rating system and it's still useful for paid content now but the kike who manages it really wants to run it into the ground
>>21979 I see. That thing where the images weren't displaying due to some cookie shit and the popups drove me off the sankaku very quickly. Gelbooru might be missing some features and may be slower but it felt really nice to use regardless and doesn't seem to have any of the kike shit
>>21976 >>21977 The have some archived artworks that have been uploaded there and not anywhere else iirc, at some point someone needs to scrape that site and free us from this jewery cause its getting worse (And will get worse)
>git pull from pre-gradio update commit
>no errors besides a few extensions that need to be reinstalled
feels good
>>21979 >randomly paywalled posts the paywall is a ruse to trick people into subscribing. if you have a plus account, those "paywalled" posts just disappear completely from search results. the way it works is that any artist that is on their blacklist just has their posts hidden behind the "paywall" or hidden for plus users. fuck that site.
So from what I understand, old latent upscale used to noise the image up and result in extra details (best case) but require denoise >0.55. If I 2x upscale with Remacri and denoise at 0.55, what "extra noise" multiplier would get me an equivalent effect to the latent upscaler? I find that anything >0.2 noise is just unusable. Most of the time I only add 0.02 or 0.04. 18.2MB catbox: https://files.catbox.moe/3shgff.jpg
>>21984 My opinion is heavily biased because I implemented the feature and this script on gist but I've personally found extra noise to shine the most when it's applied to specific parts of the image. https://gist.github.com/catboxanon/69ce64e0389fa803d26dc59bb444af53 It's very sensitive as I've explained before, so it frying shit at a value of >0.2 makes sense. When it's masked using that linked script you can push it a bit further but then the denoising process can potentially hallucinate what the actual details should be, like generating some random new pattern that it interpreted should be there due to the noise. The only thing so far I've seen it consistently work well for is anything wet and anything a bit more predictable in texture, like hair.
>>21985 I think it works pretty well to increase "traditional media" effect
(642.27 KB 1024x1536 catbox_ns4557.jpg)

(454.29 KB 1024x1536 catbox_grsm1z.jpg)

(587.80 KB 1024x1536 catbox_86yppi.jpg)

(516.00 KB 1024x1536 catbox_l2dllq.jpg)

>>21985 Thanks. I'm still trying to understand that script, maybe I just need to run it and see what happens. But I think it's saying to inpaint with extra noise only in specific parts, and then there's some blurring to make the extra noise taper off. Yeah, also seen the hair texture is much improved with extra noise. Lost a few hours with lora mixes, will have to try the script another time.
<lora:artist-muk monsiuer8:1@0.35,0@0.55:hr=0> does a great job of adding more interesting poses and composition without muking up the picture too much as long as you flush it with another lora >>21986 yeah, anything where there's some expected noise like graininess or pseudo-brush strokes or whatever generally takes to some extra noise pretty well. i've pushed it as high as 0.3 on some lora mixes before things started to come apart.
>>21981 That too, yeah. Stuff that isn't even on yande.re >>21983 I didn't know that, I thought it just paywalled random posts. They literally just fucking gaslight you with bait?
>>21989 >They literally just fucking gaslight you with bait? A lot of websites do this funny enough.
I haven't used Latent in like 6 months, perhaps I should give it a proper try again.
>>21987 Put it in the scripts folder and then there'll be a new accordion to use in txt2img. It's functionally the same as an extension, I just don't want to bother with a whole repo for people to report bugs and then I have to maintain it.
>>21989 yeah, I subbed a few years back thinking I was getting the paid posts. only realised the "paid posts" were fake when someone in the forum pointed it out. I think the blocked_artists tag is their list of artists that have DMCA'd them.
(515.77 KB 1024x1536 catbox_3h7qo2.jpg)

(383.33 KB 1024x1536 catbox_crrujk.jpg)

(581.34 KB 1024x1536 catbox_1r2g05.jpg)

(507.39 KB 1024x1536 catbox_mt8f69.jpg)

>>21992 Will try it, thanks. >>21991 Cute. The steam is a nice touch. >>21988 ><lora:artist-muk monsiuer8:1@0.35,0@0.55:hr=0> Some anon mentioned that the documented format for loractl was fucked. Is this the correct way to set hires pass to 0? muk is great, but I'll have to mess with it a bit more in this mix of loras.
>>21994 yeah that was me, some nice anon submitted a request to fix the documentation after i whined about it. colon is correct.
If vpred and zSNR start getting implemented more and more, should LoRAs also be trained with those parameters for better fitting?
>>21957 Nice, currently testing using a base model with:
+ 1.0 x (hll5.5-vpred - nai) AddDiff, TrainDiff
+ 0.2 x (EasyFluffV10-FE - sd1_5) AddDiff, TrainDiff
Base can be anything (pic is Sudachi). EasyFluff is so strong that anything over 0.25 tends to overpower the base on merge.
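for anyone who's never done it outside supermerger, the basic AddDiff step is just this. rough torch sketch only — trainDifference does something smarter than a raw subtraction so this won't match it exactly, and the filenames are placeholders:
[code]
# rough sketch of plain add-difference: merged = base + alpha * (finetune - original)
# assumes all three are SD1.x .safetensors with matching keys; filenames are placeholders
import torch
from safetensors.torch import load_file, save_file

base = load_file("your_base_model.safetensors")
finetune = load_file("hll5.5-vpred.safetensors")
original = load_file("nai-animefull-final-pruned.safetensors")
alpha = 1.0

merged = {}
for k, v in base.items():
    if (k in finetune and k in original
            and finetune[k].shape == v.shape
            and torch.is_floating_point(v)):
        diff = finetune[k].float() - original[k].float()
        merged[k] = (v.float() + alpha * diff).to(v.dtype)
    else:
        merged[k] = v  # keys missing from either donor are left untouched

save_file(merged, "base_plus_hll.safetensors")
[/code]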
>>21996 vpred requires you to train loras with v_parameterization enabled but otherwise the process is the same as far as I know
>>21992 So if I understood this right, you gen once, put the image into the box, mask the part where you want extra noise, then gen again with hires fix and extra noise?
>>21999 >light theme what is wrong with you you fucking psychopath
>>22000 If my eyes aren't bleeding, am I really seeing?
did the based67 link move? the folder with the old link is empty
>>22002 gofile nukes files after a week or a few days of no downloads, I know people still want P7 so just wait for a huggingface upload, the hashing usually takes a long while.
(2.62 MB 1536x1792 biglunacheer2.png)

>>22003 sounds good, will do
>>22004 the fat tits are for aki, not ruka :(
(582.17 KB 1024x1280 catbox_lya82g.png)

>>21929 thanks, another prompt saved
https://civitai.com/articles/2345 any way to replicate this with a1111? I guess multi controlnet? I haven't used ip adapter before
(615.08 KB 1024x1536 catbox_gbi1pa.jpg)

(644.93 KB 1024x1536 catbox_vh042j.jpg)

dumb prompts sometimes yield good gens
I've been a bit out of the loop, as I was burned out for a while. Any new interesting models or other news?
(1.99 MB 1024x1536 catbox_x6tet8.png)

>>22009 Based67 dropped.
>>22009 Based67 HLL5.5 Furry/Vpred models
>>22010 >>22011 Seems like there are multiple versions of Based67? Gotta look into it. Thanks
>>22012 >>22009 Noice, catbox of both pls?
>>22012 OA and v1 are both ToT and P7 kinda looks like b64.
>spend an hour winnowing down the best toph lora
>don't know which one it is because the civit hashes don't match what a1111 provides
what's up with that
>>22015 because civit is highly regarded and hashes the entire file instead of excluding the metadata even though most UIs stopped doing it that way months ago
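for reference, this is roughly why the numbers disagree: safetensors files start with an 8-byte length + a JSON header (which is where training/civitai metadata lives), so a whole-file hash changes whenever the metadata changes while a hash of just the tensor data doesn't. rough sketch of the two approaches, not claiming this matches either site's exact implementation:
[code]
# rough sketch: whole-file sha256 vs sha256 of only the tensor-data area
# of a .safetensors file (header = 8-byte little-endian length + JSON)
import hashlib, struct

def whole_file_hash(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def data_area_hash(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        f.seek(8 + header_len)  # skip the JSON header (and its metadata) entirely
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

print(whole_file_hash("some_lora.safetensors"))  # placeholder filename
print(data_area_hash("some_lora.safetensors"))
[/code]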
Anons where are you, I summon thee, to provide catbox of these images, I beg u
I'm here just doing dataset cleaning, I'm not any of the anons that made those unfortunately
>>22018 2nd is on aibooru with metadata https://aibooru.online/posts/28163
(699.97 KB 640x960 catbox_8h31uv.png)

Testing Chikuwa
(1.48 MB 1024x1536 catbox_kdlzeu.png)

(1.83 MB 1024x1536 catbox_uexlro.png)

>>22018 here you go I'm warning you tho, these were made with older versions of the takorin and fizrot LoRAs
(122.73 KB 219x182 whyMelt.gif)

>>22022 I think I pretty much peaked with the second one and it's not even a Shoebill pic
>>22022 Dang, is there a way to find them in older versions?
(124.77 KB 1280x720 i gotchu fam.mp4)

>>22023 >belly erotic So the first one for me.
>>22025 Much obliged, partner
>>22013 https://files.catbox.moe/8uu1kx.png https://files.catbox.moe/q6pcmo.png >>22014 I only found one from civit any other links for the variants?
I hate the finetune life
>I hate the finetune life
I hate the finetune life
>I hate the finetune life
I hate the finetune life
>I hate the finetune life
I need a break from this thing
>>22030 >Me on v223 of a lora Yes.
>>21900 Can you do kuroboshi kohaku
(1.31 MB 944x1416 39902-775322978.png)

>>22033 thanks anon
Has anyone made a lora that boosts the ability to have non-black pubic hair?
[Sad News] Unet is pretty good at not retaining even walls of text over images, but not so much when it comes to speech bubbles. This Atahuta lora is gonna need a lot of work.
(1.50 MB 1280x1600 catbox_z5o0nw.png)

Man these vpred zsnr models sure are something
>>22039 At least provide a catbox of one of them pls
(195.66 KB 1024x1024 OIG.jpg)

(163.90 KB 1024x1024 OIG.0Sk.jpg)

It's grim. We are stuck here tweaking techs 3 generations behind cutting edge because of censorship.
>>22008 what is this "template" (power nymph) thing you have in a1111? and is there a comfy similar tool lol
(124.74 KB 1024x1024 OIG..jpg)

>>22043 Yeah. Dunno if this shit can actually run on consumer hardware even if it leaks (it won't of course).
What's a good place to offload around 1TB worth of models online? Some of these I can't find anymore, others I don't even know what the fuck they are or if they are even good. I'm just trying to clean house right now and keep only the shit I need, and have a place to redownload stuff that could still be useful such as old finetunes/dreambooths.
>>22046 huggingface may not be the most reliable but it's a good backup option. I got around 100gb of models and probably around 200gb of loras dumped there. I dunno about a whole 1tb but you can probably do it with multiple accounts if it actually restricts the amount of shit you can upload there
>>22047 As long as they don't one day nuke the page out of the blue I'm ok with dumping even just a chunk.
>>22043 >>22045 what are you two talking about?
>>22048 Don't think this happened before. Should be okay unless you upload cunny pics there but ultimately this isn't something you can rely on. >>22049 Dalle 3 released. You can check 4cuck /g/ catalog
>>22049 Dall-e 3?
>>22050 I'm waiting for my NAS I preordered (it got delayed over the summer) to come in so once it comes in I can go back to self hosting all my shit. I don't even have space to fit another drive on my computer lol
>>22050 >>22051 >Dall-e 3 ooooh well, we might not have the technology of a company worth like 90 billion or whatever but at least we aren't kikes (this isn't sarcasm)
>>22043 Imagen had about a year and two months head-start on SDXL. It's only a matter of time before ClosedAI's parlor tricks get figured out.
>Checking through my saved bookmarks of rentry pages
>Still have the Gape.ckpt rentry page
>Last Edit: 11 Aug 2023 12:51 UTC
holy kek the guy is still alive
https://rentry.org/gapemodel
I managed to go down a bit of a rabbit hole with Gape. Apparently the guy spent 350 bucks renting an A40 to work on Gape. Gape is trained on Gelbooru instead of Danbooru and the images were from Sankaku so it has Sankaku tags, which could explain some of the weirdness with prompting on stuff like AOM Hard.
>"prefer gelbooru (NAI) tags as the main ones were mapped when training on sankaku data [...]"
>"tags were only mapped starting at epoch 50 (oops), so it will understand sankaku tags somewhat"
Also not sure what he means by
>technically only trained for 45 epochs since it started at epoch 15
But basically, 32k images trained over 60(?) epochs. He also recommended merging the model for better stability. Wonder what's up with this situation where you have to merge a finetune with another model to make stuff stable.
>>22056 When r34 model came out (this was months back) it was pure dogshit on its own and you had to merge it with something else to get decent results, same thing with waifu diffusion.
Someone got a Lora for on model Iris from Pokemon? Friend birthday comming soon and feel like prompting some stuff for him
Uploading to Huggingface is fucking slow as shit
>>22057 weren't people really really against mixing in WD?
>>22060 into better models? yeah it made WD better but made the model it was being mixed into worse.
Going through all my saved models from all the way back in the day, and using the archives to go back and search for their origin, a lot of them really were just Any+NAI shitmixes with some obscure flavor dreambooth shoved in somewhere along the way but somehow gave it soul.
>Finished moving everything I wanted to HuggingFace
>Saved Huggingface, Civitai, and /h/ /vtai/ /g/ Rentry/download links of models I had
>saved old 4chan mix recipes documents if they weren't on any of the cookbooks
>deleted the rest of the models
>freed up 1.3TB
feels good
>>22044 no template, that's literally the prompt. same model, prompt and seed and you should get the same gen.
jesus fucking christ just looking at shiori's eyes sends my dick into overdrive
>>22065 kek you reminded me of picrel
>>22066 fucking NEED
>>22067 >AI clone her voice >make her say it I know /vt/ frowns upon but as long as its for your ears only... YOU MUST
>>22068 >I know /vt/ frowns upon You couldn't pay me to care about board ""culture"" so it's absolutely not an obstacle to me
>>22068 forgot to attach a selfie >>22069
https://files.catbox.moe/3jql6p.png
>Grid Filesize Yuge
Got around to finally checking out all the versions of Anything5 Ink (aka basically AnyV3 definitive per the creator) and pulled out the ol reliable NotPriestess anon's prompt that was made on Anyv3 as a test to compare all the new variations. If basically any of these versions are an improvement over AnyV3, might be worth revisiting old tried and true mixes for a new base.
One thing that puzzled me is that on the huggingface page he has linked for the original Any Ink model, he states this in GPT-translated Chinese=>English:
>Many of today's SD models have a variety of problems. I want to make use of my limited ability to improve the current situation. So I used a lot of AI-generated images to refine this model
>So I used a lot of AI-generated images to refine this model
>This is a fine-tuning model based on stable diffusion. The model is fine-tuned based on SD1.5. The model was fine-tuned with HCP-diffusion
https://huggingface.co/X779/Anything_ink
Not sure what to think about AI images for training data. From what he has said in various comments, Anyv3 was basically a merge of all available dreambooths that were created at the time (November 2022 and back, basically), so we know that regular Anyv3 wasn't finetuned in any way. Could be he trained 1.5 base instead of NAI with AI art and then just used the same merges? Results don't look that greatly deviated for the "AnyV5Ink_Ink" variations, which is the finetuned one according to the shit GPT translations.
>>22071 These mixes make no sense to me. Seems like some random combination just happens to look good sometimes for no reason.
>>22072 Chink brute forcing
>>22068 considering how popular the mori shitposts were i don't think most of /vt/ gives a shit about vtuber RVC, you might find some seething diehard fans on twitter or whatever though
>>22072 most anime model mixes are just mashing random shit together and hoping it takes off, 95% of people posting them on civitai are retarded and don't understand that popular mixes either involve finetuning, block merging, or careful trial and error to get their results
>>22075 Would it be feasible to convert models into loras or locons and then fine-tune the blocks in the A1111 UI before baking in the final settings? Seems like it could be way faster to iterate on small variations of multiple models rather than doing a full block merge run.
>>22076 i don't do model merges so i'm not entirely sure but if you can do a clean diff between a finetune and its base model then merging in the locon should theoretically be the same as merging the models, or at least very close
>>22077 Seems like they got their correction.
>>22080 cute! was the consistent face just luck?
(917.71 KB 640x960 00056-2146591760.png)

(892.61 KB 640x960 00064-996952322.png)

>>22071 Aside from the very different looking images, I can’t tell which of these are better
(1.90 MB 1280x1600 catbox_v82zv9.png)

Easyfluff test
>>22087 dude......
>>22087 whats with the testicle cameltoe bulge lol I assume this is Fluff's doing
(1.64 MB 1280x1600 00094-2033408639.png)

(1.70 MB 1280x1600 00088-3343329826.png)

(1.70 MB 1280x1600 00086-3343329826.png)

(1.74 MB 1280x1600 00100-938687042.png)

>>22089 It's definitely biased towards dicks and bulges as I was getting them on girls without prompting for them. Still fun to play around with >>22087 Just realized this could be interpreted as something worse but I intended for it to just be salmon run themed and just mud not anything else
>>22090 just go to /d/
>>22089 PUFFY
Hydrus chads, is there a way that I can link several images together in a parent/child like set up how you see in Boorus? An example would be having a set of CGs of a location but different time of day or a character sprite with different outfits in the same pose.
>>22081 Thank you! More likely just luck, I even changed some loras between those 4. Although I feel like 4th one looks a bit different.
>>22095 Delete that shit before you get banned
>>22095 Yeah, delete that shit. Check the board rules.
Anyone got re-up of ponsuke? Anon down now
>>22099 Cute, is it rustle+comiclo?
>>22100 https://files.catbox.moe/r28lj4.safetensors You are gonna have to rename the file I guess: ponsuke_(pon00000)
So I just bought a 22TB drive because it was on sale for 280 instead of 450. Wonder how fast this shit is gonna fill up.
>>22093 select files - > manage - > file relationships - > set relationships It doesn't seem that there's an exact equivalent to parent/sibling (where one is marked as the master image) but after setting this you can then right click - > managed - > file relationships - > view alternates
>>22103 Not very fast unless you like hoarding models
(1.78 MB 1024x1536 00937-2165863684.png)

(2.04 MB 1024x1536 00002-2117826214.png)

(2.10 MB 1024x1536 00007-2710812814.png)

(2.24 MB 1024x1536 00010-1160680015.png)

(2.08 MB 1024x1536 00038-1241678787.png)

(1.98 MB 1024x1536 00428-732887955.png)

(2.07 MB 1024x1536 00011-3673405703.png)

(2.25 MB 1024x1536 00058-1507740435.png)

>>22106 >>22107 This looks soulful.
>>22098 Box of all of them pls they camel out nice
>>22099 Oof those ones are good and I like space lolis, please catbox of all?
edge? or mega? decided that i can't download from mega anymore, the downloads complete inside mega but i never get the download prompt from the browser. did a full refresh + restarted the browser and then my pc
>>22111 aaand now it works fine it is a mystery
>Enable quantization in K samplers for sharper and cleaner results. This may change existing seeds
Just noticed this setting. I think I've read about it before but don't remember what. Should I enable it or not?
2views with a good style melting down episode #4945827 scraping rn
I'm being extremely lazy right now since I don't really have anything too specific in mind but the AI managed to surprise me
>>22114 It's embarrassing how these artists will sabotage themselves by degrading their own art or actively making their shit harder to find
>cleavage cutout and clothes cutout in the negatives
>boobs out instead
Now you're just doing it out of spite.
>>22117 >completely nude fuck this, restarting
>>22114 >he thinks people rely on booru tags and not just tagging images themselves lol lmao
(1.88 MB 1024x1536 01635-822664568.png)

(2.25 MB 1024x1536 00012-1539867596.png)

(2.09 MB 1024x1536 00022-4190265881.png)

(2.15 MB 1024x1536 00291-819837823.png)

(1.99 MB 1024x1536 00131-1162751549.png)

(2.31 MB 1024x1536 00004-3233191665.png)

(2.31 MB 1024x1536 00056-3891317098.png)

(2.36 MB 1024x1536 00002-1078671810.png)

(1.34 MB 960x1320 catbox_qgth6l.png)

(372.40 KB 1024x1536 catbox_iezlka.jpg)

>>22101 it's this mixture. haven't heard of comiclo, you using the lora from civitai? will check it out >>22110 space lolis make me hopeful for the future https://files.catbox.moe/2z24bs.jpg https://files.catbox.moe/d8k6ts.jpg https://files.catbox.moe/0v7vby.jpg https://files.catbox.moe/98dlx1.jpg
>>22124 comiclo is the takamichi lora from civitai yeah, someone from here or 4ch also made a takamichi one that's in the lora list but the civitai one is baked a bit harder
>inpainting shinobu into libraries and gardens
none of you can stop me
(493.40 KB 1536x2304 catbox_3aq6xu.jpg)

(71.34 KB 512x768 12278-0-depth.png)

>>21929 >darkblue640.png god bless you anon
>>22126 I never thought of this but inpainting anime characters into photos sounds like a very cute idea, be sure to share some if they come out well.
(419.88 KB 1024x1536 catbox_cp7wze.jpg)

(374.74 KB 1024x1536 catbox_e3qwwa.jpg)

(289.55 KB 1024x1536 catbox_qhth5q.jpg)

(360.09 KB 1024x1536 catbox_ruat7e.jpg)

>>22101 >>22125 this mixture was a great idea. without a bit of rustle, couldn't get penetration (see last pic). using this civitai lora, not sure if the best https://civitai.com/models/149039/c-comic-lo-takamichi-style
>>22129 tbh rustle is pretty bad at penetration too since his works are heavily censored, probably would work best if you put a sex lora in at like 0.3-0.4 weight or inpainted the sex zone with easyfluff
(1.70 MB 1024x1536 catbox_9ydqzj.png)

>showering concept LoRA works with regional prompter Time for gacha.
>>22131 very nice. need to try out regional prompter again
>>22132 I enjoy it but the gacha hard mode edition + the touch ups you often have to do can be a bit of a pain at times.
>>22114 >makes money by whoring up underage characters from others IPs for profit >complains about parasites
(4.69 MB 2048x1365 catbox_107ywk.png)

>>22128 it's pretty fun to be honest
(2.05 MB 1280x1600 catbox_hnh7qk.png)

Super healthy space
>>22135 Cant wait for the endless JustinRPG-like images once AI becomes a common commodity of lolcows marrying their waifus and what not.
>>22137 der ewige kiwinigger
>>22138 Nah im just joking about it, I'm not about to sit here & be on a board for weirdos made for obsessing and discussing even bigger weirdos. I did watch the first few parts of the chris-chan docu which was pretty entertaining but just dragged as it went on.
>finish doing dataset prep I planned out
>decide to do just a little bit more to round the edges with a couple anime episodes
>finished it and now doing even more dataset prep from two more movies
>I still haven't done tagging clean up
I must have gone insane or hate myself
>>22140 "You can only reach perfection with ambition"... or that's what your brain thinks, probably
gimp is so trash, washes out the colors when it saves.
>>22143 Is this rustle + takamichi? Looks very cute. I started using an auto1111 plugin to automatically add saturation to my gens but before that (and for non-txt2img stuff) I'd manually increase saturation in my image editor. I use paint.net for quick edits instead of GIMP though, which has its own problems but I've never had it fuck with the color profile too much.
>>22143 >>22144 the cfg rescale ext has a convenient color fix option works well, but it saves both images (and that can't be changed it seems)
>>22143 Very cute!
>>22145 I just grabbed this extension, turn it on and set the color slider to 2 for low saturation outputs and it instantly gives much more vibrant reaults: https://github.com/Zuellni/Stable-Diffusion-WebUI-Image-Filters
I haven't visited since colab is kill, as I'm still scavenging for a PC. How's Intel GPU for stable diffusion and general use?
>>22141 My brain is also aware of the reality that is "Perfect is the enemy of good" and "forget sleeping, finish this shit now"
(1.02 MB 1024x1536 00030-3293323123.png)

(1.31 MB 1024x1536 00953-2165863683.png)

(1.51 MB 1024x1536 00007-406041822.png)

(1.61 MB 1024x1536 00030-892901037.png)

(1.24 MB 1024x1536 00326-3136454020.png)

(1.82 MB 1024x1536 00114-435854864.png)

(1.38 MB 1024x1536 00029-1676324398.png)

(1.64 MB 1024x1536 00092-521164717.png)

>>22150 Cute
>>22143 take the photoshop 2019/v20 pill already
>>22148 A770s are okay for 1080p gaming but I still wouldn't. How much money do you have? Used 2080 Tis are getting cheaper every month
>>22153 yeah probably. but even pirated photoshop is full of weird DRM bullshit. I don't want adobe to see my lolis
>>22155 >but even pirated photoshop is full of weird DRM bullshit No it's not lmfao, give a source or stop spreading lies you've heard from some troonix privacy nut on /g/ >I don't want adobe to see my lolis 0.0.0.0
>>22157 now draw them pissing on each other
(1.64 MB 1024x1536 00037-63655112.png)

(1.86 MB 1024x1536 00134-2066608030.png)

(2.32 MB 1024x1536 00298-774915681.png)

(1.92 MB 1024x1536 00125-3128444809.png)

(1.58 MB 1024x1536 00104-843906081.png)

(1.98 MB 1024x1536 00173-174249619.png)

(2.23 MB 1024x1536 00017-3663672638.png)

(2.01 MB 1024x1536 00258-3204285780.png)

>>22155 >>22156 If you want to use the newer/newest pirated Photoshops and other tools like Lightroom, AE, etc, you need the GenP crack and to constantly be on the lookout for new phone-home addresses that need to be blocked. Otherwise get a 2019/2020 crack of Photoshop and you are good.
>>22161 At worst you'll get a popup and you'll need to run the autoblocker again on newer versions. They literally can't do shit to you.
>>22161 i've been using a version of 2022 i got off rutracker for like a year now without any problems
>>22162 Yea, Adobe can't spy on your local PC contents, only whatever telemetry is sent back to check if you have a crack version and most of it is preblocked. And if the autoblocker doesn't work, you just need to put the new phone home address in your system32 host file. The only thing you won't be able to play with on the latest cracked versions is the AI tools like in Adobe Firefly or the Generative Fill options in photoshop because they locked it behind the Cloud subscription and requiring seeing credit card info on an adobe account. >>22163 If you aren't getting that unauthorized pop up then you are fine. I started getting them when using other Adobe shit so I needed to get GenP or else even just regular photoshop would get the stupid pop ups.
>>22164 Never needed those AI features and 2021/2020 is already bloated dogshit so I just went back to 2019/v20 on both machines (but had to switch to Affinity Photo on my Macbook because Adobe products on OSX are absolute garbage) I'd say 2018/2019 are the last usable ones regardless of your hardware or people's fears of Adobe sending their lolis straight to the glowniggers.
>>22165 Yea I don't use the AI shit, I was just pointing out that is really the only limitation of using cracked Adobe, and no reason to be scared of Adobe snatching your lolis,
>>22166 Oh shit they're
>>22167 Anon what hap
>>22167 >>22168 Wtf am I having a stroke?
>>22169 Don't say the name Candlejack or e
(1.81 MB 1024x1536 00014-4171601081.png)

(1.85 MB 1024x1536 00027-1440126366.png)

(1.43 MB 1024x1536 00032-320499909.png)

(1.92 MB 1024x1536 00079-3048712922.png)

(1.12 MB 1024x1536 00453-2176735003.png)

(1.39 MB 1024x1536 00113-3676383606.png)

(1.67 MB 1024x1536 00136-3317337180.png)

(1.70 MB 1024x1536 00004-1772634080.png)

Is there a Fumikane lora? Not on gayshit
What are the usual parameters for a good locon extract of a model?
>>22157 very nice! >>22166 >>22167 >>22168 >>22170 pour one out downloading PS 2019/v20 now. gonna have to try remember all the shortcut keys
I, for one, don't think about Adobe looking at my cunny editing, don't think they'd do that, but after encountering crack malware I completely avoid anything that involves cracks even if it's a very popular torrent. There's no safe way to check for embedded malware like keyloggers, right? I should overcome this mental block somehow because fuck gimp.
What's a good model for loli stuff? Trying to gen Jack from Fate.
>>22176 i mean you can just block the application's outgoing traffic in your OS firewall which completely stops it from trying to send anything over the network
>>22177 UOOOOOOOOOOOOOOOOHHHHHHHHH young hard drive erotic >>22176 I don't pirate much anymore but fuck giving Adobe (or EA) money. Get an older torrent that's been around long enough to have been scanned, dump the .exe onto virustotal and if it comes up clean then you're good.
>>22176 as long as you get it from a trustworthy torrent site you're good, rutracker/1337X is generally safe bet, still as other anon said scan your shit if you're really paranoid.
>>22158 But that's illegal. >>22175 Thanks.
>>22183 now that's based
>>22177 >>22180 GET FULL OF DATA PLAP PLAP PLAP PLAP PLAP PLAP PLAP PLAP
>>22181 >as long as you get it from a trustworthy torrent site you're good, rutracker/1337X is generally safe bet, still as other anon said scan your shit if you're really paranoid. i would trust rutracker without scanning anything but i don't think torrents uploaded to 1337x are verified
I neglected fixing my atahuta lora, instead I mixed with a little 7010. I don't quite have the motivation for such a hurdle so for now just don't use it at weight 1 and put speech bubble in negative at like weight 2 https://mega.nz/folder/dfBQDDBJ#3RLMrU3gZmO6uj167o-YZg/file/sGgzmKQQ
(1.85 MB 1280x1600 catbox_bnr7vn.png)

>>22183 Nice butts
Speaking of cracks, anyone tried topaz ai for upscales? I've seen pretty impressive upscale results posted on cuckchan, but for 3dpd. Could it be used for anime?
>>22189 it works about as well for anime as it does for 3dpd, but i've never found it impressive
Experimenting with a Based64 style transfer block merge into Based67 since I want to try to "soften" 67's hard lines with 64's brushwork/brightness for wider style lora compatibility while also diluting the HLL character bias. The examples are style transfer and then 0.4 and 0.6 weighted sum back into 67, but the results are very 67 biased. I'll definitely experiment with it more since some of the individual gens are promising.
>>22191 >style transfer block merge how do you do that?
>>22192 There were some block merge recipes in a guide, being selective with which blocks you merge tends towards only specific traits of the model transferring:
Style Transfer
1,1,1,1,1,0,0,0,0,1,0,1,0,1,1,1,0,0,0,1,1,0,1,1,1
Style & Composition Transfer
1,1,1,1,1,1,1,0,1,1,1,1,0,1,1,1,0,0,0,1,1,1,1,1,1
Composition Transfer
0,0,0,0,0,1,1,0,1,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0
Character & Composition Transfer
0,0,0,0,0,1,1,1,1,0,1,0,0,0,0,0,1,1,1,0,0,1,0,0,0
Character Transfer
0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0
I did "style and composition transfer" from 64 to 67. A friend of mine did the same for 66 to 67 to good results, but I think it's going to be a bit harder for 64 due to the higher disparity in styles.
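if anyone wants to sanity-check what those 25 numbers actually touch: they map onto the SD1.x unet as the 12 IN blocks, the middle block, then the 12 OUT blocks. rough sketch of a per-block weighted sum along those lines — this ignores the text encoder/base alpha, probably differs from supermerger's exact key bucketing, and the filenames are placeholders:
[code]
# rough sketch: per-block weighted sum driven by a 25-value MBW preset
# (IN00-IN11, M00, OUT00-OUT11); text encoder / VAE / base alpha ignored
import re
from safetensors.torch import load_file, save_file

weights = [1,1,1,1,1,1,1,0,1,1,1,1,0,1,1,1,0,0,0,1,1,1,1,1,1]  # "style & composition"

def block_index(key):
    m = re.match(r"model\.diffusion_model\.input_blocks\.(\d+)\.", key)
    if m:
        return int(m.group(1))               # IN00-IN11 -> 0-11
    if key.startswith("model.diffusion_model.middle_block."):
        return 12                            # M00
    m = re.match(r"model\.diffusion_model\.output_blocks\.(\d+)\.", key)
    if m:
        return 13 + int(m.group(1))          # OUT00-OUT11 -> 13-24
    return None                              # everything else stays model A

a = load_file("model_a.safetensors")  # placeholder filenames
b = load_file("model_b.safetensors")
merged = {}
for k, v in a.items():
    i = block_index(k)
    if i is None or k not in b or b[k].shape != v.shape:
        merged[k] = v
    else:
        w = weights[i]
        merged[k] = ((1 - w) * v.float() + w * b[k].float()).to(v.dtype)
save_file(merged, "style_transfer_merge.safetensors")
[/code]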
>>22191 I think it might be impossible to achieve this with merging. The hard lines in 67 and the brushwork in 64 probably live in the same layers. See this post: >>19538
What you could try to do is merge one layer at a time and see what effect it has. Time consuming, but it gives you the best chance of achieving the specific outcome. If you merge a layer and it introduces the hard lines, you know you have to avoid that layer.
(356.14 KB 1549x647 mbw2.png)

>>22191 >>22192 BasedAnon, what did you do to make Based66 the most compatible with LoRAs?
>>22194 Yeah, I plan on doing extensive testing this weekend to try and get better results, it's my first time playing around with mixing beyond 2 models weighted sum and that was basically my first attempt. I did get softer results by not weighted sum merging back into 67 so there is much to be tested. >>22195 What program is this? Looks very useful even if just for making annotations.
>>22197 not a program, I just took it from this (it's been posted on here before, a few threads back): https://www.figma.com/file/1JYEljsTwm6qRwR665yI7w/Merging-lab%E3%80%8CHosioka%E3%80%8D?type=design&node-id=1-69&mode=design
>>22198 I'm glad that thing is somehow still getting updated.
>>22198 Holy fuck this is useful, I was scraping rentries and 2hu AI discord and somehow never found it or any resource with nearly as much information. If I have some spare time I might try to put together a new rentry with more obscure but useful stuff like this since it seems like the old /hdg/ ones aren't used or maintained anymore.
>>22200 I have only seen that guide posted here lol
>>22200 some guy named Baka Man made it, he's kind of a namefag on here. idk really, you could try contacting him, he's been on this board before. If you want to add shit, idk, I just look to it for reference
(1.38 MB 2560x2688 output.jpg)

(171.04 KB 640x768 catbox_zsjpg1.jpg)

(215.17 KB 640x768 catbox_318dvc.jpg)

what a nightmare trying to get a half angel half demon hybrid with regional prompting
time for vidya
>>22196 Those came with a nice feeling, catbox?
Haru and Salt have officially left the dev team for Waifu Diffusion
>>22183 nice
>>22206 They have basically been absent at least since the WD 2.0 tests so this isn't much of a surprise to anyone in the loop. To expand on what i said on /h/ I think SDXL is sunk cost and anyone trying to finetune it is wasting their time, I don't get why people are so persistent with it instead of exploring alternatives. Touhou AI focusing on LLMs is also silly since I'm not really sure what they can provide that other people aren't already doing to great effectiveness. Nothing short of some unannounced miracle breakthrough makes their projects worth following now, it's a shame since WD 1.3 was such a huge deal as an open source project and their plans after that showed so much promise.
>>22209 >focusing on LLMs they said Pygmalion type models it's even worse than you'd think
>>22206 Jesus, that never went anywhere huh. They spent all that time training and got a couple half-assed models
>>22209 Both the decision to focus on SDXL (and even continuing to) and the shift to LLMs are really fucking baffling to me, and at this point I can only believe that even very knowledgeable AIfags have no fucking clue what they're doing. I cannot find a value proposition argument for an LLM made by them unless they figured out something no one else has.
(6.44 MB 3456x1626 tmprage4qw8.png)

>>22191 I negatively block merged a random boldline lora on gayshit to based67 but can you elaborate on the "HLL character bias"?
>>22213 >HLL character bias Not him but he means trying to reduce the strength of the HLL, the hololive finetune model, in the mix.
>>22191 What about a simple weighted sum merge? I've tried 80% based64 + 20% based67 and it doesn't seem to release mustard gas
I finally fucking finished, no more dumping extra work on myself. Just running scripts and organizing shit today so I can start getting a new model training going.
>HLL deleted 5.5
>Says ver6 is coming in a few days
>>22217
>zyugoya lora
holy fuck, except it already 404'd before I could download it
>>22219 https://gofile.io/d/BvJSTa here is my try on it again
(2.09 MB 960x1440 catbox_efzo18.png)

(1.71 MB 960x1320 catbox_ccalvh.png)

(1.81 MB 960x1440 catbox_v88ydf.png)

(1.80 MB 960x1440 catbox_cpwj15.png)

>>22221 fuckin nice, the only issue that crops up for me is the hair gradients sometimes showing up unexpectedly, but otherwise it's a very good LoRA for opening up the options when generating more mature bodies
(1.69 MB 960x1440 catbox_2ljafu.png)

(1.90 MB 960x1440 catbox_cdn6ia.png)

(1.53 MB 960x1440 catbox_z5mxn2.png)

(1.68 MB 960x1440 catbox_ilo5da.png)

>>22222 It's not always an issue, though. I see you didn't tag the characters but it seems like you can guesstimate them with the right tags (especially Ochiai). If you rebake I wouldn't add characters, but I would go back and add unusual/spiral pupil tags since those seem to be missing yet randomly pop up anyway.
>>22223 sounds reasonable, I'll see if I can do a better job tomorrow
(4.21 MB 498x469 skill-issue-w.gif)

>>22176 >but after encountering crack malware
(1.31 MB 4000x2493 catbox_za0lsr.jpg)

(1.25 MB 4000x2493 catbox_51tk6l.jpg)

(1.25 MB 4000x2493 catbox_q89a5b.jpg)

(1.34 MB 4000x2497 catbox_eqkvl6.jpg)

https://mega.nz/folder/OoYWzR6L#psN69wnC2ljJ9OQS2FDHoQ/folder/DxRWFRJb
Might as well post this in case someone finds it useful, it's a version of the b64-b67 hybrid I was experimenting with earlier in the thread that I'm happy with. Recipe is very simple and included in the MEGA.
>>22213 >>22214 Yeah, HLL is a Hololive finetune so it causes certain traits like horns or various kemonomimi to tend towards the versions present in Hololive designs. It's not super noticeable in the Based mixes that use it, but the block merge I used is specifically intended to mitigate it.
making an aruma lora, also messing around a little with custom depth maps
>>22228 Really like the perspective on the first pic, almost reminds me of that one scene from Eromanga Sensei
Has anyone trained a better pregnant LoRA than the old one on MEGA?
(1.37 MB 4000x2138 catbox_l8gc2b.jpg)

(1.31 MB 4000x2138 catbox_nvrsk2.jpg)

>>22221 the style is there but the tenc is a little burnt, it's less responsive to prompts compared to the old zyugoya lora
(4.07 MB 4096x858 file.png)

>>22231 the same set of seeds, the new zyugoya lora at weight 1.0, with tenc switched off
>>22231 >>22232 well, call me an idiot, for I have no idea what tenc is. what should I change before putting it back into the oven?
>>22233 nta but probably lower or turn off the text encoder learning rate.
>>22234 yeah I am an idiot, time to put it back into the oven without the text encoder
>>22235 or just with a very low learning rate like 4e-5
>>22236 https://gofile.io/d/wAh3D7
now without text encoder. Maybe someone could tell me what the 'best' recipe is for cooking a lora? A long time ago there was a shitstorm about which weights to use, but maybe some acceptable rates are public knowledge now?
>>22237 There is no guaranteed one-size-fits-all config, but all training parameters are public knowledge since you can just look at the metadata in your UI to see how something was trained. The general rules I follow for training are 32/1 or 16/1 dim/alpha locons for styles (trying to get 200+ images) and 8/1 for characters (aiming for 50+ images, typically bumped up to 16 or 32 if doing multiple outfits), 1 repeat, and adjusting epochs/batch size to reach 1600-2400 total steps.
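If the step math there isn't obvious, it's just this (a quick sketch; the 1600-2400 window, batch size and image counts are only the numbers from the post above, not gospel):

```python
# kohya-style step budget: total optimizer steps = images * repeats * epochs / batch_size
def total_steps(images, repeats, epochs, batch_size):
    return images * repeats * epochs // batch_size

# e.g. a 220-image style set at 1 repeat, batch 4: pick epochs that land inside 1600-2400
for epochs in (30, 40, 50):
    print(epochs, total_steps(220, 1, epochs, 4))   # -> 1650, 2200, 2750
```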
(2.84 MB 2048x1644 xyz_grid-0004-2242532613.png)

(1.83 MB 1536x1230 xyz_grid-0005-36717428.png)

I notice that a lot of models seem to have issues with "smug" for some reason while vanilla based64 was pretty okay with it. When I used 67 I had to inpaint the mouth pretty often because it looks kinda fucked up, for example. The problem seemed to carry over to >>22226 anon's merge. Also b67 sometimes gives a bit fucked up eyes with some loras while b64 works okay. I'll try this merge some more though, looks interesting.
>>22239 >I notice that a lot of models seem to have issues with "smug" for some reason its a NAI dataset issue
>>22240 based64 doesn't seem to fuck it up as often though. is it because of hll mixed in?
>>22242 Would depend on what the difference is between 64 and 67 aside from the version of HLL
>>22242 67 doesnt use hll at all
>>22243 well then
>>22244 Yep, it explains a lot since HLL is a finetune with a fairly large dataset and better captioning tools than NAI used.
>>22245 Yea so I was right, its a NAI issue
>>22237 All of my loras for the past 6 months or so are made with the same settings, other than turning flip_aug off for characters: https://pastebin.com/dDCRKmNn
The only thing that changes between datasets is me changing the repeats to make every dataset roughly match 500 total images
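For anyone copying that approach, the repeat count is just simple division against the target (sketch; the 500 target is from the post above):

```python
# Pick repeats so images * repeats lands near a fixed effective dataset size.
def repeats_for(images, target=500):
    return max(1, round(target / images))

print(repeats_for(60))    # 8  -> 480 effective images
print(repeats_for(180))   # 3  -> 540
print(repeats_for(450))   # 1  -> 450
```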
>>22247 Also with these settings I consider 60 true images to be the minimum for a 'good' lora
(13.76 KB 181x181 1683192786714170.png)

>>22239 Damn brat! waving at me with that expression!
(901.95 KB 1152x1440 00001-3793068043.png)

(1.56 MB 1024x1536 00009-874357666.png)

(1.72 MB 1024x1536 00047-110292189.png)

(1.95 MB 1024x1536 00017-3522121602.png)

(1.99 MB 1024x1536 00029-261771742.png)

(1.33 MB 1024x1536 00144-2269162581.png)

(1.63 MB 1024x1536 00172-2664610533.png)

(1.62 MB 1024x1536 00042-2868478594.png)

(1.54 MB 1024x1536 00002-3660850557.png)

>>22250 catbox?
>>22237 Simply fixing the typo in your training parameters would yield much better results: tenc_lr=3.2e-4 with unet_lr=6e-5 makes no sense for a style lora, or any lora at all. I guess you meant unet_lr=6e-4
>>22256 >tenc_lr=3.2e-4 unet_lr=6e-5 I think he confused the values he took from lazylora anon's settings, he uses unet=3.2e-4 usually
>>22257 It looks like the exponent on the tenc is just incorrect, since it's typically recommended to set tenc to about half of unet, higher if reinforcing certain tokens. 6e-5 unet with 3.2e-5 tenc is pretty reasonable for a style LoRA, though personally I usually have to go to around 1e-4 unet or higher since I train with alpha 1.
>>22258 >since it's typically recommended to set tenc to about half of unet says who lmao
>>22259 says ME bitch
>>22260 Based.
>>22260 then i don't know where you're getting your information. the "typical recommendation" has always been text enc an order of magnitude lower than unet, not half.
>>22262 says who
>>22263 i'm not going to spoonfeed a retard. every training rentry along with the original kohya sd repo suggests this. you can check archives and also the metadata of well trained loras.
>>22264 Both Kohya and Lynn recommend setting tenc to 1/2 unet, and pretty much every non-dadapt LoRA/LoCon I've trained follows this, with some actually using a tenc closer to 3/4 if I'm training a concept or character which requires a custom token or heavily overriding an existing one. Setting tenc lower than 1/2 is probably fine for styles since you can sometimes turn off the tenc for them entirely and they still work, but outside of that I can't imagine a tenc LR an order of magnitude lower would be able to pick up important information...
the /sdg/ threads have been so bad lately on 4cuck I've been visiting the /lmg/ boards and trying to read up on all the shit on their end of the AI spectrum.
>>22265
>Both Kohya and Lynn recommend setting tenc to 1/2 unet
show me where on any of the sd-scripts training readmes it says this. as for lynn, literally who?
>I can't imagine a tenc LR an order of magnitude lower would be able to pick up important information
this is just blatantly wrong, and to begin with you can't even make this determination based solely on learning rates. off the top of my head i remember the penis on face concept lora using a unet 1e-4 and text enc 4e-5 and the imminent penetration concept lora using unet 1e-4 and text enc 5e-5. text enc being half of unet has never been typically recommended anywhere since we started experimenting with loras in january. in fact, the only mention of this i can even think of is for adaptive optimizers, which we are not talking about.
>>22267
>off the top of my head i remember the penis on face concept lora using a unet 1e-4 and text enc 4e-5 and the imminent penetration concept lora using unet 1e-4 and text enc 5e-5
my brother in christ those tenc values are 1/2 of the unet values
>>22266
>the cringe contest between /sdg/ and dalle
Absolute fucking retards tbh
>>22268 oh yeah my bad guess i'm shit at math. regardless, i've seen a lot of learning rates like 1.5e-5 text/1.5e-4 unet and 5e-5 text/1.5e-5 unet but i suppose i'll concede that may be outdated and it may not be typically recommended nowadays. everything else i said is still true though
>>22268
>my brother in christ
hahaha
>>22273 that first one might be a little too puffy. you're approaching ballsack level puffiness
>>22269 The silver lining of that stupid situation is that the SAI bought-and-paid-for shills are the most visibly upset out of everyone participating; everyone else doesn't give a shit.
>>22274 let him cook
you made it to the desert loli planet
>>22277 finally, a good NMS planet
(980.50 KB 704x1000 gel035.png)

I've been wanting to make an Ike artist lora, but my gpu's old and frail, and apparently google didn't like me using their compute for text cooming a few years ago or something. Someone feel like doing it for me?
https://files.catbox.moe/qol3ul.zip
Here's all of the training images I got from manga covers and color pages, as well as some stuff from pixiv/gelbooru. I cropped or edited out at least 95% of the text there was on the images. Tagged with wd14 moat tagger v2 and roughly curated by hand afterwards. I'll adjust shit if needed.
>>22220 >>22278 Nice. Glad you liked the lora.
>>22277 c@box pls
(2.05 MB 1024x1536 catbox_9hbltx.png)

(2.04 MB 1024x1536 catbox_qhmre8.png)

(2.24 MB 1024x1536 catbox_9xfihi.png)

(2.13 MB 1024x1536 catbox_f3d5wf.png)

>>22251 Welp feels wrong to use this guy's style for what I'm using it for. >>22281 Thanks for baking and sharing.
>>22277 I 100% guarantee this will be a mod in Starfield
>>22284 >>>>>starefield
>>22285 so really funny thing, the uncanny staring is largely due to bethesda being bethesda and forgetting something as important as AO maps for the face textures, naturally someone already made a mod to fix it: https://www.nexusmods.com/starfield/mods/2718
>>22285
>tfw introvert and don't like to make eye contact even in vidya
yeah that sucks for you guys
Why would I give bethesda any money?
>>22282 there was quite a bit of inpainting of this original gen. but damn the pelvic curtain does it for me. https://files.catbox.moe/pzm3ug.jpg
>>22289 just noticed a mistake in my template with both standing and sitting specified. probably why I was getting a few 2girls gens off this prompt.
>>22288 >paying for games lmao
>>22280 working on it now. in the future i recommend using swinv2 over moat since it tends to be more accurate. also you forgot to flatten one transparent image, otherwise very good job cleaning the dataset, shit is preem
>>22288 last time i gave bethesda money was bloodmoon and that was a mistake
>>22292 Thank you so much! Looking forward to the result!
>>22283 >Welp feels wrong to use this guy's style for what I'm using it for. he designed 2b's ass for a reason, i'm sure he's fine with it. i love the thighs on the 4th one
>>22292 >in the future i recommend using swinv2 over moat since it tends to be more accurate i'm not convinced of this. got any test comparisons?
(166.40 KB 1328x985 catbox_4mlnff.jpg)

>>22292 >>22296 top is moat, bottom is swinv2. imo moat is more accurate, aside from serafuku missing (23% confidence, which was below the default 35% cutoff)
>putting intricate in negative made hands better
>>22298 intricate has always been a placebo
>>22300 it definitely wasn't in the sd 1.5 days, i did a shitload of testing
>>22298 That whole "8k digital wallpaper intricate detail trending on artstation" stuff and all those other natural language words were indeed placebo, in the same vein as how "greg rutkowski" worked as a magic quality booster despite having little presence in the dataset and the images not really looking like any of his work.
(423.88 KB 512x768 00121-568008584.png)

(560.35 KB 512x768 00131-568508584.png)

(667.68 KB 512x768 00142-5685095585.png)

>>22299 God fucking bless you. It's a little stuck on that one hairstyle, but I love it!
>>22286 no one will care because the "people" (read: grifters who don't e/v/en play games) have already decided that starfield bad
>>22304 But it is a mediocre game.
>>22305 it's pretty good, better than fo4 at the very least
>>22305 its basically a true bethesda sandbox, without their bullshit awful lore. cant wait for mods since none will feel out of place
>>22307 waiting for space fishing
(1.54 MB 1024x1536 catbox_zq4u0n.png)

yes hello welcome to demon church, confess your cool sins unto me
>>22310 >confess your cool sins unto me i cheated at the class' q3a tournament on the last day of high school to win my IT professor's old gtx 690
>>22312 Also for some reason he deleted his older versions Why?
>>22302
>people still pretending laion doesn’t exist
>and not knowing the definition of placebo
>>22314 just add tranny sex to every gen then
>>22315 source: i forgot to take my meds
>>22302 >despite having little presence in the dataset what's the source for this claim? laion aesthetic website doesn't show the full dataset you know?
>>22318 The Greg Rutkowski thing was stated by the SD1 devs (don't recall if SAI or RunwayML): "greg" by itself pulled a ton of favorable weight when producing images that included Greg Rutkowski in the prompt. This has been known since the SD launch. Also, while it is true that that website doesn't hold the entire dataset, that portion is the set with the highest aesthetic rating, meaning it pulls the most weight of everything in LAION. But you should know better that LAION is shit, and no CLIPs at the time of SD1.x would ever use those kinds of subjective descriptors to caption or tag images.
>>22314
>It made a change I like so it's not a placebo even if it didn't actually understand the concept
many such cases, like the idiots prompting "picture perfect" or some shit thinking that the ""AI"" knows what that even means.
You think it's possible to make a lora that keeps tails attached where they're supposed to be, or is that a lost cause?
>>22321 possible? yes. doable without any style interference? highly difficult. fluffyrock does tail attachment perfectly though that's obviously because it was finetuned on a dataset primarily of characters with tails and so most back shots had tails. i would honestly just gen without a tail and then sketch it in and img2img if i wanted the placement to be perfect every time. if you wanna do a lora you're going to need to slow bake a 500+ image dataset with lots of good shots in different styles showing clear attachment of various tail types.
>>22322 yet another NAI dataset/tagging issue?
>>22323 probably a combination of that and nai just having such a large dataset that lesser concepts get diluted, like it nails the IDEA of kemonomimi and tails, but gets confused by placement since that's only a small subset of the overall data
>>22322 I haven't played around much with sketching. How accurate do I need to be to 1) actually get something that looks fluffable, and 2) not fuck up what's already there? And would I put tail in the negative prompt, then put it in the positive when doing img2img, or what's the best way to go about it?
>>22323 I think it's the same issue as with fingers, but unlike fingers, only a fraction of the dataset has them, so they just sprout out of wherever they damn please.
>>22325 You don't have to be terribly accurate, just block in the general shape with the colors you want and inpaint around that region for minimal destructiveness. You shouldn't have to negative anything either unless you have issues with tails popping up unprompted, but yeah, you just put "x tail" in the positive with an otherwise identical prompt for img2img/inpaint.
(237.50 KB 768x1152 catbox_wfxxfv.jpg)

(315.32 KB 768x1152 catbox_5v7u8j.jpg)

(258.27 KB 768x1152 catbox_wxhu1z.jpg)

(251.65 KB 768x1152 catbox_ukhzyf.jpg)

>>22299 cute locon!
>>21457 Bro, you gotta tell me your secrets. Looks awesome.
Any suggested LoRAs for pussies? Base models kinda fucking suck at pussies, probably because most of them are censored in porn, but are there any good loras you friends would recommend so SD learns how to draw proper ones?
>>22310 cute demon! hail satan
>>22329 there's a puffy vulva lora in gayshit that's good but a bit overbaked, if you want more flexible genitals download easyfluff and use it for inpainting, since furry datasets don't have censorship they're infinitely better at gens
>>22331 Can I trouble you to link it? Since there's a lot of pussy loras out there
>>22333
>Google Easyfluff
>mfw
How can you get anything usable out of that
>>22334 inpainting genitals
>>22335 Wouldn't they also come out furry?
>>22336 what do you mean come out furry, e621 still does have plenty of furry art and 90% of furry porn has human genitals anyways. that's why it's so good at it, right behind not being censored like most anime-style porn
>>22337 plenty of non-furry art* god damn my brain is melting today
>>22336 There's a "not_furry" tag, and you can also set autocomplete to e621 mode. I've only used it to inpaint genitals (mostly dicks, but it's better at masked-only inpainting of pussies too if you use the innie pussy or peach pussy tag). You can also use it to generate a smut scene without loras and then inpaint the result for the looks/style; at least one dude on /h/ is doing this
(1.88 MB 480x480 mfwLoliGens.gif)

>>22330 UOOOOOOOOOHHHHHH Mesugaki demon lolis torturing me for all eternity
>>22340 >mfwLoliGens.gif lol
>>22343
>Oni demon lolis
That's amazing, the punchline is pretty great as well
(2.63 MB 1536x1536 00007-4056227628.png)

>>22028 where is the andy_find lora?
>>22343 that was great.
>>22347 skirt lift, from below is a great combo. also
>perfect hands
ToT
>>22319
>>It made a change I like so it's not a placebo even if it didn't actually understand the concept
yeah you're right that means it's not placebo. honestly get a dictionary
(1.84 MB 1024x1536 00146-260120612.png)

(1.76 MB 1024x1536 00135-4168013323.png)

(2.13 MB 1024x1536 00046-327434204.png)

(2.12 MB 1024x1536 00432-905632919.png)

https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg added yamakita higashi
>>22320 lol i already have a maigon dataset i've yet to sort through. gomudan looks nice
>>22328 https://files.catbox.moe/yadzrg.png i mean here is a catbox for one of them. really it's just the 4x-eula-digimanga-bw-v1 upscaler fixing the hatching on them
(1.93 MB 1024x1536 00054-3863046930.png)

(1.85 MB 1024x1536 00055-2182176173.png)

(1.96 MB 1024x1536 00029-1066745827.png)

(2.23 MB 1024x1536 00355-3186107813.png)

>>22350 Holy f catbox plz
>>22258 The original repository had information that the TE has almost three times fewer trainable parameters than the UNet. That could explain why 1/2 or 1/3 LRs are fine for it.
(1.95 MB 1024x1536 00023-2558853728.png)

(1.74 MB 1024x1536 00031-3877181346.png)

(1.99 MB 1024x1536 00045-3228139961.png)

(2.17 MB 1024x1536 00303-1931398897.png)

(1.81 MB 1024x1536 00016-388016295.png)

(1.84 MB 1024x1536 00027-223774257.png)

(1.87 MB 1024x1536 00033-4160090114.png)

(1.92 MB 1024x1536 00045-709874690.png)

>>22322 >>22326 I must be doing something wrong, because even just fixing a tiny part of the tail in a simple image has a miserable success rate, and even when it does kinda work, it looks inconsistent. Trying to get it to do an entire tail is basically impossible. Got any advice? Also if I were to start collecting that dataset, would it be a bad idea to crop the images down to just the tail and the surrounding area?
Are the lametta/rabbit models just more shitmixes? the mesugakis are cute but style lora compatibility seems kinda bad. I'm just getting tired of based64 bros
>>22349 So if sugar pills cure cancer then it's not a placebo? I think you actually need to read the entry instead of just holding the book ready to throw it at anyone who disagrees with you.
>>22357 use lower denoise?
https://twitter.com/novelaiofficial/status/1712161100421107904 Looks like they have been working on NAI 2.0
>>22362 Cool but I bet this won't leak.
>>22362 given how much they would have learned since they did the original NAI, this one should be insanely good. sad that we won't be able to get our hands on it.
>>22362 a second hack is unlikely, but i remember the nai crew doing some sullen muttering after the hack about how they had planned to release some stuff anyway once the turk had gotten his paper published
>>22345 Catbox?
>>22360 if we go along with your absurd hypothetical and say sugar pills did in fact cure cancer, then yes, they wouldn't be placebo. if it was something else besides the sugar pills that cured the cancer, it would be placebo. it's really not that complicated.
>>22367 >about how they had planned to release some stuff anyway d'you know, somehow i don't believe they would
>>22370 good news, we're probably going to get a second opportunity to find out!
(2.74 MB 1664x1120 1.png)

>>22362 Image doesn't look particularly impressive, we'll have to wait and see if it's worth anything.
>>22371 what could they possibly release that would be beneficial to us? i doubt they'll release the model and the info isn't much use to us considering A) we don't have the hardware to train a model anywhere close to the same scale B) we've seen how hacked together NAI 1.0 is
>>22373
>how hacked together NAI 1.0
They were racing to market on NAI 1.0. There are bound to be mistakes when rushing something for a deadline. Even so, the result was a huge step above anything else at the time. And I think it was NAI that introduced resolution bucketing, which allowed portrait and landscape. I think it's fair to say they're quite competent. I haven't followed SDXL at all, but it seems it's censored. If they take their time to train an uncensored SDXL anime model, I expect that will be a leap above current models.
>>22374
>If they take their time to train an uncensored SDXL anime model
You know full well SDXL is just one big sunk cost fallacy, UNet isn't the bottleneck
(1.64 MB 1024x1536 00039-3505726556.png)

(1.86 MB 1024x1536 00067-885909205.png)

(1.93 MB 1024x1536 00088-731053678.png)

(2.05 MB 1024x1536 00005-1853674881.png)

(1.31 MB 1024x1536 00034-1398084691.png)

(1.91 MB 1024x1536 00055-3493811197.png)

(2.10 MB 1024x1536 00328-97236986.png)

(2.17 MB 1024x1536 00102-2303466295.png)

>>22375 I don't know anything about SDXL. I keep hearing it's 'bad' but I can't run it on my PC anyway so don't care. But regardless of what model they train, I think if NAI takes a second crack at a model trained from scratch, it will be really good. I'll shut up now, since it's just my opinion. But I just wanted to point out that shitting on the first NAI model is pretty easy in hindsight, forgetting the context of when they were training and releasing it. Have a loli.
NAI was never gonna be an XL finetune, they said they want to control their models. People assumed that just meant the LLM; it's now clear this includes imagegen.
>>22378 > But I just wanted to point out that shitting on the first NAI model is pretty easy in hindsight, forgetting the context of when they were training and releasing it. no one ever shit on nai because of its quality (well, back then anyways). people shit on nai because they were pieces of shit shilling on 4chan without releasing their model publicly. man was it funny watching the reactions when the model got leaked
>>22379 would they train on laion from scratch and then finetune on booru?
>>22381 They don't need to finetune if they train from scratch because they have the entire dataset.
>>22382 with their language model, they trained from scratch on the pile and then fine tuned that on novels and similar
>>22383 I could be wrong but its technically just continued training, they just switch the main dataset and use the specialized training. I recall that if you use a saved state instead of the completed model it trains better.
(4.96 MB 1600x1280 00160-1669019880.png)

>>22362 I'm not hopeful, but NAI would at minimum have to outdo the shitmixes on Civitai, let alone match the functionality that people have come to expect with loras and controlnet. The release of dall-e 3 is also going to raise people's expectations.
>>22385 Since NovelAI has their LLM and has always been that two-for-one package, they don't need their model to compete with dall-e directly one for one. If it was only their model they'd need a bit more work, granted they said it's a preview so quality is up in the air. If the model is released publicly or leaked, it will get tuned by others and will be the de facto winner. In fact, any model that gets leaked, incorporated into Auto1111, and becomes the FOSS option will be the definitive choice unless it is also censored.
>>22376 1 and 2 are very nice
>>22389 from a deep learning newsletter that I don't understand. but looked like a good visualization of what is in the different layers.
>>22389 >>22390 Was about to say, because that shit looked nonsensical without context
>>22389 >>22390 Huh well, that's and interesting peek behind the curtain
>>22393 cute
>>22388 >>22393 Can I ask for catbox?
>>22393 Also what model and loras do u use?
>>22395 >>22396 man what is up with the newfags around here. literally scroll up and he has catboxes nigger
(1.63 MB 1280x1600 00406-631749851.png)

(2.67 MB 1600x1280 00356-2261041947.png)

(1.84 MB 1280x1600 00111-2944326650.png)

(2.48 MB 1280x1600 00151-2030411389.png)

Random tests
>>22397 I must be blind then
>>22399 see >>22327 read the filename
>>22400 Um, they don't look like using the same style lora, at least
>>22398 is the first one dokomon?
(1.48 MB 1024x1536 00051-678236032.png)

(2.17 MB 1024x1536 00112-3423985840.png)

(2.18 MB 1024x1536 00020-4071455608.png)

(2.24 MB 1024x1536 00442-3483510158.png)

(1.25 MB 1024x1536 00003-3749120793.png)

(1.38 MB 1024x1536 00005-1896968322.png)

(1.68 MB 1024x1536 00356-333013830.png)

(1.84 MB 1024x1536 00076-333013824.png)

I'm on an ancient commit and typing stuff into the pos/neg boxes is incredibly fucking laggy *now*, it's nearly unusable. It's fine when using --listen and accessing it from an iPad, but even more unusable when accessing it through my SD845 phone. Why?
(1.34 MB 1024x1536 catbox_ymj7cp.png)

>>22403 now this this is better than podracing
Wtf happened to SD threads man?
>>22410 why, is there something going on
>>22410 On 4cucks? Who knows. I know /sdg/ is getting its just deserts with the Dall-e shills laughing at the SAI employees posting there but not anything in particular.
it seems HLL anon is no longer making finetunes and just Lycos. https://huggingface.co/CluelessC/hll-test/tree/main/lora
>>22404 Many thanks
(1.39 MB 1600x1280 catbox_ff8rz8.png)

Easyfluff test
(1.40 MB 1280x1600 00408-292594827.png)

>>22415 It seems flat chested + large nipples will be more difficult than anticipated
(1.92 MB 1024x1536 00015-1249672838.png)

>>22412 >I know /sdg/ is getting its just deserts with the Dall-e shills laughing at the SAI employees posting there but not anything in particular. as it should be lmao
Is Dall-E using different technology entirely? Or is it the same as Midjourney where it's just a specialized SD model?
>>22419 i mean it's a diffusion model, just a good one rather than emad's drippings
(1.47 MB 1024x1536 00024-945141438.png)

(1.53 MB 1024x1536 00059-2836068299.png)

(1.70 MB 1024x1536 00006-27782396.png)

(1.87 MB 1024x1536 00031-3390123059.png)

https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg added kaamin (mariarose753)
>>22407 catbox for second one?
>>22415 hot. gonna have to try out that artist combo
(1.39 MB 1024x1536 00164-432326176.png)

(1.51 MB 1024x1536 00003-2280913049.png)

(1.54 MB 1024x1536 00278-2626884224.png)

(1.69 MB 1024x1536 00013-412500568.png)

>Jewidia will release a new architecture every year
So instead of playing the "PLEASE GOY JUST BUY OUR LATEST $2000 GPU PLEASE IT'S FOR FUTUREPROOFING YOU'LL BE SET FOR HALF A DECADE PLEASE GOY" game once every 2-3 years it'll be yearly starting from 2024, whoopee
>>22421 Gladly: https://files.catbox.moe/3yq4n6.png
I somehow forgot to take the chair straddling LoRA out of the prompt though...
>>22423 Jesus Christ.
>>22419 much larger text encoder (an actual LLM instead of dusty old clip)
presumably better VAE / more latent depth channels so that text doesn't get mangled
probably bigger unet
most likely better quality training
>>22424 Sorry, did I say 2-3 years? I meant just 2 years.
>>22425
>much larger text encoder (an actual LLM instead of dusty old clip)
So the one thing that's holding us back?
>>22426
>>much larger text encoder (an actual LLM instead of dusty old clip)
>So the one thing that's holding us back?
i mean yeah that's the main thing it's got going for it
>>22426
>So the one thing that's holding us back?
Yep. Furries are still tiptoeing around that idea since there aren't many people who can actually train shit there, and they're already occupied by one or more projects. Lodestones tried using T5 and it seems to work, but he's busy testing a lot of shit for fluffyrock (there are already two new branches of that model, one using minsnr and another that tries to introduce EMA weights again)
>>22428
>furries have realized that their bank accounts aren't actually bottomless and can't sustain 5000 USD commissions and 25K USD adoptables anymore so they're working on SD instead
they're finally doing something useful huh
>>22429
>finally
they've been at it for months, oldest commit to fluffyrock is like 7 months ago. it's one indonesian tard with Google TPU gibs and yet they manage to actually make shit
I scraped together a roughly 900 image dataset of fox lolis, and also manually tagged the number of tails up to five. It'd be great if someone could spoonfeed me a training config, because I have no clue what I'm doing with this shit.
>>22421 how do you feel about paxiti (to make gens with burning tranny flags of course)
>>22429 They have been outpacing SAI with Google gibs and small cash donations. Most of their work prior to getting that pod was on the equivalent of 16GB VRAM cards, but expertly pulling features from 2.0 and better datasets. They haven't been able to do an SDXL model, but everyone knows it's pretty much a dead end. The only thing left for them to do, which is probably impossible for them because of hardware, is a full model training from scratch.
>>22398 Is the last one a style or a character lora?
>installed paint.net because PS won't start (my scratch disks are full)
>paint.net update
>install
>installer removes the old version
>"there is not enough space on the disk"
>installer closes
>effectively just downloaded an update then uninstalled itself
>>22435
>PS won't start
you got hit by Adobe's latest anti-crack wave. refresh your hosts file and firewall rules, and try to free up a little bit of space.
>>22436 or maybe, just maybe read the post yeah?
>>22437 You should take your own advice
>try to free up space
Adobe likes to cache shit on the hard drive so you can probably free space from its directory alone.
It's over.
>>22439 What? Did Israel fall already? Did I fucking miss it?
>>22440 god I hope
Imagine waking up tomorrow and seeing the news "Africa and Middle-East vanished, replaced by seawater"
>>22441 >Imagine waking up tomorrow and seeing the news "Africa and Middle-East vanished, replaced by seawater" I'd be extremely upset, Africa has a ton of cool animals including the one my wife is based on. How about we just get rid of the niggers that plague it instead?
So HLL anon is done with finetunes? Guess that means Based anon should find an alternative or go back to HLL3
>>22444 Warosu is down so I can't check his specific post since its cycled out of normal archive, but from what I recall, he just said that the Lycoris was to skip steps for model mixing, but he did mention some threads later that the current bake did not play nice when merging. So not sure what he's doing, what he is thinking, or where this lycoris path came from since he had been sitting in on the furry discord reading their info and furries have been on point.
(1.75 MB 1280x1600 00212-3846269987.png)

(1.77 MB 1280x1600 00210-341948708.png)

(1.62 MB 1280x1600 00206-3974102223.png)

(1.74 MB 1280x1600 00184-3887568944.png)

>>22442 Ah yes, Halloween themed gens soon. I should have the stuff I genned last year in a folder somewhere, could be fun to look through them.
(2.35 MB 1280x1920 00522-3776594606.png)

>>22425 haha, mayhaps SAI is already working on something similar. I sure hope so
>>22449
>clueless
they won't, the best SAI can do is poach other people's research, they haven't put out anything worthwhile themselves
on another note, the furries seem to have figured out EMA weights and jax lion8bit, meaning their model that was starting to overfit quite a bit will most likely get even better, though that'll take some time
>>22450 I wonder if it's more trouble to finetune fluffyrock with anime human shit or start over with 1.5 (or 2.0)
>>22451 I feel like finetuning FR with human stuff would only be useful to monster girl enthusiasts
actually i need 10 minutes with FR and a midna lora brb >:)
>>22452 >midna lora it already know midna well
>>22453 knows*
>>22447
>Ah yes, Halloween themed gens soon.
oh boy, i almost forgot
(1.96 MB 1280x1600 00158-3776550396.png)

It's been a while since I last enjoyed just prompting this much without loras, regional prompt, or controlnet
>>22432 yeah i wouldn't mind doing that one
i notice with chibi girls you get more accurate toes and fingers. i got those feet for free, hands inpainted
>>22460 who dis? who dat? Some elaboration would be nice as I'm out of the loop.
>>22461 The creator of Fluffusion and a collaborator on Fluffyrock, meaning if he does an anime finetune it's going to be uncensored, preserve artist tags, and use all of the technical developments lodestone implemented that Stability is allergic to.
>>22462 I should clarify that by uncensored I mean the dataset won't be gutted in any way, not that the artwork will be uncensored.
>>22460 I am praying I am literally willing to retrain every single lora I have made for this
>>22460 >>22462 There's been next to no significant advancements in like 10 months so I'm extremely sceptical
>>22460 sirs but sdxl...
>>22466 the unet sirs... it is very large sirs...
>>22458 this one is missing something, not sure
>>22462 Cool, thanks for the answer. If they're doing purely anime I really look forward to it. I've heard good things about furry models being "good" for doing non-furry shit but haven't tried it out myself to verify.
>>22468 kek, improvement turned into a transition panel
>>22469 furry mix made this fyi, original mixes dont have 4k chibi girls
>>22460 Any info on how large the dataset will be and how long it might take in terms of GPU time?
>>22470 Huh, couldn't even tell. Nice. One thing putting me off trying out fluffyrock was the overpoweringly western artstyle being posted when people were touting better genitalia. It's not obvious in your gen at all aside from the thick outlines, but then again that's not exclusive to western art.
>>22472
>aside from the thick outlines
good spotting. it may have taken thick from fluffy, i only prompted chibi
>overpoweringly western artstyle being posted when people were touting better genitalia
kek, yeah i wouldn't use fluffy directly, it is a meme. a 0.3 mix works well. some people mix it with base64 first then another model, or you can try to find a preexisting mix that semi works and then mix that with your favorite model.
>>22473
>a 0.3 mix works well. some people mix it with base64 first then another model, or you can try to find a preexisting mix that semi works and then mix that with your favorite model.
is it a weighted merge? Maybe there's a good way to do block merges with furry models, so the style doesn't transfer as hard but some nsfw concepts are reinforced? I haven't done block merges myself though.
not nsfw but I was thinking about trying to extract the game's sprites and finetune retrodiffusion with them if it's feasible
also thank you moriya shrine, i'm glad i didn't give them a cent after seeing this
>>22471 I haven't been keeping up, but that guy locally only has a 3090 Ti, though he was talking about trying to get an 80GB A100, so that's pretty much your spread. Per his word, an A100 cuts the training time down to about 25% of what the same job takes on a 3090/24GB card.
>>22475 if you pick that option she instantly kills you btw
>finetune retrodiffusion
just finetune 1.5 with retro/sprite art
>>22477 i wish it did
(1.48 MB 1024x1536 00038-784199198.png)

(1.96 MB 1024x1536 00121-1357503694.png)

(1.88 MB 1024x1536 00183-438589309.png)

(1.56 MB 1024x1536 00147-1357503694.png)

(1016.25 KB 1024x1536 00049-3282373258.png)

(1.10 MB 1024x1536 01284-1774648073.png)

(1.48 MB 1024x1536 00040-2010889611.png)

(1.41 MB 1024x1536 00146-3987139328.png)

>>22480 neat
(2.09 MB 1024x1536 catbox_dvbc5n.png)

>accidentally discovered how to do a nasty fake rotoscope effect
>can't heckle the /h/ thread with it because i'm banned for racism again
(1.53 MB 1024x1536 00007-1216181872.png)

(1.70 MB 1024x1536 00095-1185562620.png)

(1.53 MB 1024x1536 00027-2756586311.png)

(1.62 MB 1024x1536 00192-1057900977.png)

>>22483
>accidentally discovered how to do a nasty fake rotoscope effect
we're one step closer to reviving Cing
(1.35 MB 1024x1536 00058-2558350289.png)

(1.54 MB 1024x1536 00078-465074114.png)

(1.95 MB 1024x1536 00020-2073675829.png)

(2.36 MB 1280x1920 00079-288461756.png)

>>22484 Neato, hopefully I'll find some time to actually test all these new styles during the weekend.
>>22456 god damn, got a catbox then? looks great without loras
(1.69 MB 1280x1600 00336-4158656651.png)

(1.79 MB 1280x1600 00340-4158656651.png)

(1.95 MB 1280x1600 00501-4158656651.png)

(1.70 MB 1280x1600 00558-4158656651.png)

>>22488 Here ya go https://files.catbox.moe/dtdpc4.png It's crazy how flexible these are compared to NAI. If the same thing is transferred over to an anime model it would be amazing though it definitely wouldn't kill the need for loras. That said I haven't been successful training loras on vpred/zsnr models so far, just keep getting block size mismatch errors
>>22489 but can it do knots
(647.76 KB 640x800 catbox_cocp6l.png)

>>22490 Surprisingly not that great. Here's the best that I could get for you
>>22491 i laughed
>best friend across the world took time off work to catch up on games and anime with me on my birthday so that i wouldn't be too lonely (i'm not doing anything this year, all my friends moved closer to their universities)
alright i'm building this goddamn fucking rig for real this time
(1.81 MB 1152x1152 catbox_vzggpa.png)

(1.60 MB 1440x960 catbox_8abrnn.png)

(1.42 MB 1152x1152 catbox_3l5ver.png)

(1.40 MB 1440x960 catbox_4i7iz0.png)

>>22489 Fluffyrock's (and by proxy Easyfluff's) ability to recognize characters and somehow distinguish subjects so much better than anime models is insane. No pruning of artist tags is huge and the gloves are off for native support of loli/cub. I've been playing with Easyfluff for about a week or so, considering porting more of my anime/human LoRAs to it. Here's some of the more complex sex txt2img gens I've done, only LoRA I used for these examples is one I made for the character in the last image.
>>22494 >considering porting more of my anime/human LoRAs to it I'm considering making a lora of my own anime gens with a set style to make it usable on there, to finally achieve peak futa
>>22495 Here's some human futa gens I made on EasyFluff for proof of concept of how crazy good it is at futa:
>>22496 These don't even take advantage of all the autistic cock and ball tags e621 has, which all work perfectly. I'm also using EFv10 which has a bit of a style bonus that's apparent with humans, 9 and 11 are less noticeable and more anime-esque.
>>22498 Do you have any tips for lora training on FR? I'm trying to use easyscripts but it looks like I may have to switch to something else. I know you need --v_parameterization --zero_terminal_snr --scale_v_pred_loss_like_noise_pred. I've tried the megares, zsnr and even easyfluff models. I think you mentioned you used prodigy, but I'll probably leave the rest of my settings as usual just to make sure everything works first
>>22499 This is the version of fluffyrock used by EasyFluff, i.e. the specific checkpoint I'm training on; use it if you plan on training furry LoRAs with the intent of genning on EF:
https://huggingface.co/lodestones/furryrock-model-safetensors/blob/main/fluffyrock-1088-megares-terminal-snr-vpred/fluffyrock-576-704-832-960-1088-lion-low-lr-e159-terminal-snr-vpred-e132.safetensors
Those command line args should be the only special thing you need for training on vpred. I don't use easyscripts, but you might have to manually edit them in wherever the script inserts model arguments; I had to do this with linaqruf's Colab notebook because it hasn't been updated for vpred. I switched to Prodigy because I'm lazy and don't have time to fuck with optimizer LRs, and fluffyrock feels much more temperamental than NAI. If you're getting results but they're weird, be aware FR 10 has a strange style bias, especially on humans; this doesn't seem to be the case with 9 and 11. Those Zankuro previews were made on 11.2 because 10 kept giving them noses.
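If it helps, this is roughly what those three flags look like dropped into a bare sd-scripts run (just a sketch: the vpred flags are the ones from the posts above, everything else, paths, network settings, optimizer, is placeholder you'd swap for your usual config):

```python
# Sketch of launching a kohya sd-scripts LoRA bake against the vpred fluffyrock checkpoint.
import subprocess

cmd = [
    "accelerate", "launch", "train_network.py",
    # placeholders: point these at your checkpoint, dataset and output dirs
    "--pretrained_model_name_or_path", "fluffyrock-576-704-832-960-1088-lion-low-lr-e159-terminal-snr-vpred-e132.safetensors",
    "--train_data_dir", "dataset/",
    "--output_dir", "output/",
    "--network_module", "networks.lora",
    # the vpred/zsnr-specific part from the posts above
    "--v_parameterization",
    "--zero_terminal_snr",
    "--scale_v_pred_loss_like_noise_pred",
]
subprocess.run(cmd, check=True)
```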
Can someone do another Ootomo Takuji lora because the ones I found are kind of lacking
>>22500 NTA, but wanna try it too. What about tagging the dataset, how to do it properly? I guess I could find other hyperparams inside one of the loras. How long you were training it?
(2.36 MB 1280x1920 catbox_b2ctde.png)

I've got a temporary workaround for training vpred-zsnr loras with Easyscripts. Right now you can only enable v-param and scale v pred loss by also enabling SD2.X, but forcing the v2 parameter to false still works.
Open main_ui_files/GeneralUI.py and edit line 141 to self.args['v2'] = False
Then just make sure you enable the following:
GENERAL ARGS
- SD2.X Based
- V Param
- Scale V pred loss
OPTIMIZER ARGS
- Zero Term SNR
Sorry for not providing a real solution but hopefully it's a simple fix
>>22502 Either take the tags from e621 for images that already have decent tagging or use the autotagger
>>22503 > autotagger Is there any model or extension that could do e621 autotagging? It seems different from what I saw on danbooru/gelbooru.
>>22505 Here's a reupload of the latest one I found so you don't have to go through Discord https://pixeldrain.com/u/r9Dszb7v
https://huggingface.co/JosefJilek/loliDiffusion/blob/main/loliDiffusionV0.11.10_Based67_1.0M4-CLIP_VAE_PURIFIED.safetensors
The loli diffusion guy used B67 to make a new loli model thing. I thought it was interesting so I'm posting it here.
(1.14 MB 1024x1536 00533-4076114841.png)

(1.52 MB 1024x1536 00061-3133771999.png)

(1.50 MB 1024x1536 00097-3375669288.png)

(1.49 MB 1024x1536 00375-205461233.png)

(823.11 KB 1024x1536 00020-3410729466.png)

(1.17 MB 1024x1536 00019-200603873.png)

(1.41 MB 1024x1536 00965-179785737.png)

(1.52 MB 1024x1536 00154-3080804589.png)

time for a new bread?
>>22510 no? why
>>22506 Thanks, but how to use it? With this https://github.com/picobyte/stable-diffusion-webui-wd14-tagger extension it says that there is no project.json file.
>>22511 thought we're doing it every 1000 posts now. nvm
Anyone have a good Elaina lora that can do NSFW and hair? Most of the ones in civitai seem bad or burned from when I checked
>too late to catch the based67 p7
I'm ngmi bros
>>22513 wasn't post limit 1200 anyways?
>>22517 It was removed, last thread had to be manually locked.
>>22512 Put it in models/TaggerOnyx
>>22516 I wonder if you can sort the files by added dates on huggingface.
does anyone have a list of the tags recognized by the wd1.4 taggers, specifically for character names? getting a little tired of having to manually go through each dataset and delete all the character name / copyright tags
>>22508 cute >>22520 cute
>>22508 not a request, just another style I feel like you might like: https://gelbooru.com/index.php?page=post&s=list&tags=shinmon_akika Although a lot of pics seem kinda small
>>22521 catbox for the chair?
nai update: not better than local but it's got character
(588.75 KB 1024x1536 00042-2478117552.png)

(1.03 MB 1024x1536 00083-2618274852.png)

(1.44 MB 1024x1536 00126-4238628939.png)

(671.29 KB 1024x1536 00168-334969034.png)

(1.15 MB 1024x1536 00041-3386292153.png)

(1.45 MB 1024x1536 00070-414817956.png)

(1.69 MB 1024x1536 00461-1909954080.png)

(1.76 MB 1024x1536 00046-2486058561.png)

>>22532 Another cute style
>>22530 Slutbox of this one, so hot
>>22503 Oh yeah, I've noticed people are doing that, I'll go ahead and update the UI so that vpred doesn't need v2 selected, I just never actually thought about them being separate. on that note, any other args not present in my UI that should be added?
Is there an up to date way to extract style from a model? or can I still use old Derrian-Distro's scripts to extract Lora/Locon?
>>22538 that should still work, yes
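If you're curious what those extraction scripts are actually doing under the hood, it's basically a truncated SVD of the weight difference between the tuned model and the base. A toy sketch of the idea for a single layer (not Derrian's script or any real extractor, just the math):

```python
# Toy LoRA-style extraction: approximate the weight delta with a low-rank up/down pair.
import torch

def extract_pair(w_base, w_tuned, rank=32):
    delta = (w_tuned - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    up = u[:, :rank].contiguous()                            # out_features x rank
    down = (torch.diag(s[:rank]) @ vh[:rank]).contiguous()   # rank x in_features
    return up, down

# sanity check on random weights: the rank-32 pair roughly reproduces the delta
w_base = torch.randn(320, 768)
w_tuned = w_base + 0.01 * torch.randn(320, 768)
up, down = extract_pair(w_base, w_tuned)
print((w_base + up @ down - w_tuned).abs().max())  # error left over after the rank truncation
```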
(1.54 MB 1024x1536 00034-1224156410.png)

(1.47 MB 1024x1536 00172-4035632704.png)

(2.08 MB 1024x1536 00048-1983990442.png)

(2.07 MB 1024x1536 00120-467818061.png)

(1.77 MB 1024x1536 00020-3904321727.png)

(1.83 MB 1024x1536 00024-2387287161.png)

(2.00 MB 1024x1536 00081-4092435138.png)

(2.19 MB 1024x1536 00045-1297000017.png)

(2.20 MB 1024x1536 catbox_5jceaw.png)

(2.02 MB 1024x1536 catbox_f4gesm.png)

(2.10 MB 1024x1536 catbox_g6qpz8.png)

(2.41 MB 1024x1536 catbox_1v6pr6.png)

despite its overall lower quality, naigen seems a lot more responsive regarding composition. i can imagine getting some mileage out of it just generating base images for controlnet.
>>22542 Haven't messed with it at all and I've heard practically jack and shit since I got the trannycord ping, is V2 actually a step up over what we can do? Is it just a new model?
>>22543 new finetune i think, supposedly still ultimately based on sd 1.5. i haven't played with it enough to give any hard judgments. it can natively do higher resolutions and seems overall better than the previous model, but it still suffers from the weird NAI spiderwebbing and fishstick fingers, and output quality drops off fast with lower resolutions. i think it's unlikely to surpass localgen in quality regardless of whatever keyword tech gets discovered. notably, NAI now stores metadata in the alpha channel, so 4chan doesn't strip it.
I died from burnout for the past couple weeks. Anything I missed?
>>22524 the WD taggers have a list of the tags they know attached to them. that being said, here is the extracted list https://files.catbox.moe/3cegkv.txt
(1.41 MB 1024x1536 catbox_uyc1fn.png)

>>22536 pushed the update, it's on the dev branch, I'd like to try and fix a few more bugs before I push to main, but it's stable, and definitely works, considering this is my first attempt at making a lora using those args
>>22548 their marketing should be "it's not fagdroid" exclusively, they'd sell even more units
>>22546 thanks however i'm more curious what you used to only extract character tags from the list
>>22550 Not him but this tagger script can disable copyrighted characters: https://rentry.org/ckmlai#ensemblefederated-wd-taggers
As I have mentioned several times before, however, the "undesired tags" option doesn't work, so if you already had a preset list of tags you are constantly removing, that part is broken, but it won't tag character names, so it's at least a good trade-off. If someone can fix it or figure out what the syntax should be to get it working, that would be great.
>>22551 well it's really simple to write a script to remove tags listed in a .txt, so i just wanted a list of copyright tags that wd1.4 recognizes. the txt file the other anon gave is a good starting point but doesn't contain all the tags i need to blacklist
>>22550 my bad, I took your wording as wanting only the character tags, which was easy to extract because they are their own category, it would take more work to get other tags that are "general" tags, so I went with the lazy approach
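If anyone wants the lazy version of that removal script, something like this is all it takes (a sketch: it assumes one comma-separated caption .txt per image in a dataset folder and a blacklist file with one tag per line; adjust paths and underscore/space handling to match how your tags are written):

```python
# Strip blacklisted tags (e.g. character/copyright names) from every caption file in a dataset.
from pathlib import Path

blacklist = {line.strip() for line in open("blacklist.txt", encoding="utf-8") if line.strip()}

for caption_file in Path("dataset").glob("*.txt"):
    tags = [t.strip() for t in caption_file.read_text(encoding="utf-8").split(",")]
    kept = [t for t in tags if t and t not in blacklist]
    caption_file.write_text(", ".join(kept), encoding="utf-8")
```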
Testing a lora for that new brat from the pokemon anime
I realize I have a multitude of artist loras and I only recognize half of them by name
>>22554 I ended up getting some rng magic with this last one
Ask for naked collar in public, get naked dogeza at a shirtless cunnysseur cult meeting
(883.19 KB 640x960 catbox_ekajls.png)

>>22554 >>22555 Cute Sangos
>>22554 i have like fifty danbooru tabs open on my sd window just to check who the hell these people are
>>22554 Here is the lora, still needs some improvement, I'll keep an eye out. https://mega.nz/folder/dfBQDDBJ#3RLMrU3gZmO6uj167o-YZg/file/ZapnBZhI
>>22555 oh no this mesugaki is about to be corrected by a crowd

