/hdg/ - Stable Diffusion

Anime girls generated with AI

(1.13 MB 960x1320 bread.png)

/hdg/ #19 Anonymous 10/24/2023 (Tue) 21:26:15 No. 22560
>>22562 Easyfluff's style isn't something I enjoy but it really is pretty handy for sex scenes
(1.39 MB 768x1152 char-prushka-fr.png)

>>22563 FR style is completely dependent on what artists you prompt since it doesn't prune artist tags. However, I'm planning on making a beefy LoCon for fluffyrock that's a bunch of mixed anime styles so it can basically be used as a compatibility layer to prompt anime-style illustrations with the coherency and sexual awareness of FR. I'll probably shoot for ~10 artists to start with, I already have the furryrock-prepped datasets for Koishi Chikasa and Zankuro, next will probably be cromachina and Kakure Eria since they have lots of uncensored loli, then I'll try to get some more "generic" styles to fill the rest. I'm actually into furshit so I've basically been using nothing but Easyfluff recently and I want to prompt anime stuff too but I can't go back to NAIslop. Fun fact, you can actually train human characters just fine on FR, too. Picrelated was my test model and you can find her in the MEGA.
>>22562 Lol Nice, how is this possible? Also is it a good idea to train a character, is it possible too?
>>22565 it's already been defeated so they are wasting their time
(2.18 MB 1280x1920 catbox_cpw6xs.png)

I didn't think I would ever be able to get results this good without regional prompt
Why does adetailer keep fucking breaking and giving me black boxes?
>>22570 Probably NAI vae sperging out?
(1.57 MB 1024x1536 00009-1707324729.png)

(1.92 MB 1024x1536 00020-1583878192.png)

(1.84 MB 1024x1536 00362-1101478564.png)

(2.14 MB 1024x1536 00146-3361713661.png)

(1.86 MB 1024x1536 00088-1162196021.png)

(2.01 MB 1024x1536 00380-1522671478.png)

(2.08 MB 1024x1536 00017-1337730183.png)

(2.16 MB 1024x1536 00023-2852853649.png)

>>22573 Looks like another style that goes well with traditional media.
(2.09 MB 1600x1280 00650-3275237053.png)

It's funny that no matter the model, AI just thinks this is what kissing looks like
>>22576 I never had that issue
(2.35 MB 1600x1280 00812-4253284050.png)

(2.46 MB 1600x1280 00872-1645535792.png)

>>22577 Maybe it's just a skill issue then
>>22576 Hey this style looks very old doujin-like, catbox pls
(1.10 MB 960x1440 catbox_xgcion.png)

(1.40 MB 1248x1248 catbox_udqf47.png)

(1.38 MB 1248x1248 catbox_pww3ah.png)

(1.10 MB 1440x960 catbox_332gnl.png)

>>22572 more random easyfluff possummachine gens locon here: https://mega.nz/folder/OoYWzR6L#psN69wnC2ljJ9OQS2FDHoQ/folder/LlpSkIJD
>>22582 How??? Just how??? I tried to train a lora once in easyfluff and I just got like a brazillion errors and a headache
Also have you tried to train a character in there?
>>22583
>How??? Just how??? I tried to train a lora once in easyfluff and I just got like a brazillion errors and a headache
I have some info in my MEGA fluffyrock folder that should help with that
>>22583 Yes, see >>22566
(1.35 MB 1280x1920 01027-1119526879.png)

>>22583 Do you mean easyscripts or easyfluff? Because you wouldn't want to train on easyfluff anyway
>>22575 they're my favorite to work on lol
(1.14 MB 1024x1536 00016-4293712623.png)

(1.67 MB 1024x1536 01641-3801607332.png)

(1.72 MB 1024x1536 00138-821317654.png)

(1.73 MB 1024x1536 00035-741425422.png)

(1.45 MB 1024x1536 00031-2413011759.png)

(1.57 MB 1024x1536 00008-2987930599.png)

(1.86 MB 1024x1536 00386-1711285958.png)

(2.76 MB 1280x1920 00082-180599598.png)

>>22593 How to get good genitals? The anus I gen looks crappy and sometimes pussy too, and I use cleft of Venus v9
>>22594 What model are you using? I'm using b64 and all I have done is pussy and anus in the prompt with spread pussy in negative.
>>22595 I use b64 pruned, hmm, what could it be
>>22596 Probably depends on style lora as well
(3.66 MB 1536x2304 catbox_lttt3b.png)

(5.12 MB 1920x2880 catbox_d5cyfg.png)

(1.47 MB 1024x1536 catbox_ihxozn.png)

(1.30 MB 1024x1536 catbox_71kuxn.png)

(6.66 MB 1920x2880 catbox_r0ebxj.png)

(5.85 MB 1920x2880 catbox_mws15l.png)

(8.24 MB 2560x3840 catbox_n2lh2l.png)

(3.83 MB 1536x2304 catbox_7oym40.png)

(3.17 MB 1536x2304 catbox_bqlpw7.png)

(4.60 MB 1920x2880 catbox_5vhjsw.png)

(4.25 MB 1920x2880 catbox_8ab1fu.png)

(2.57 MB 1536x2304 catbox_d9l7m4.png)

(1.38 MB 1280x1600 catbox_72lmdw.png)

Clothing tags are lacking on fr though it doesn't seem to matter much
(1.64 MB 960x1440 catbox_2a04de.png)

(1.49 MB 960x1440 catbox_hylur3.png)


(1.46 MB 960x1440 catbox_ytfmao.png)

>>22602 that seems to be the one major weakness of fr prompting, e621 doesn't have very comprehensive clothing tagging compared to danbooru. otherwise yeah, if you're training a character or whatever it doesn't matter much since you probably want their outfit or a variant on it which will work fine.
(1.60 MB 960x1440 catbox_p841rz.png)

>>22602 also holy shit i just noticed nahida has proper pupils in that
(1.23 MB 1600x1280 00341-107628434.png)

(1.67 MB 1600x1280 catbox_tmh79n.png)

(1.71 MB 1280x1920 00178-3138630404.png)

(1.77 MB 1280x1920 00132-2172220286.png)

>>22604 It's pretty easy to just prompt for pupil shapes like heart eyes or spiral eyes
>>22606 Coming right up
(3.77 MB 1280x1920 catbox_v3mz1l.png)

>>22607 Thanks catbox extension
(2.73 MB 1280x1920 catbox_xz8xy3.png)

(2.27 MB 1280x1600 catbox_qd8qp0.png)

>>22612 I have a feeling that putting her in a box is just going to anger her even more.
(1.45 MB 840x1440 catbox_e1hh4d.png)

GENTLEMEN, BEHOLD BLOOMERALLS
>>22614 science has gone too far
>>22617 I require box for all of them please
>>22606 Very cute, catbox?
>>22599 >>22600 >>22601 these truly bring a grown man to tears
(1.96 MB 1024x1536 catbox_rbd9p5.png)

uh oh looks like someone knocked up their sleep paralysis demon
I need some of these catboxes for those that aren't linked already.
>>22629
>Infinity-chan hasn't even tried beating 4chin to the punch in implementing AI EXIF data for images
Cmon man
(1.53 MB 1200x1488 00047-742.png)

She's on heat
Hi lads, been lurking /hdg/ for a while now, thanks for existing, first post
>>22627 What style is this? Looks really good
>>22631 Welcome, Love Sylvie. She's a cutie that deserves a lot of head pats.
>>22632 a model named 'kakigori v3' plus the dreamwave/borscht/maco22 loras
https://civitai.com/models/100505/kakigori
https://civitai.com/models/62293?modelVersionId=159804
https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg
https://mega.nz/folder/oUokGaJR#OWRSRLgTrr9MrbPzkAFfYA
really i just borrowed a few things from the antlers anon's prompts on aibooru. it seems like a good combination for painterly backgrounds
the borscht lora is really good for skinny loli-type bodies with midriffs/back arches
(2.15 MB 1280x1696 catbox_vvitvx.png)

>>22603 Catbox for the third one?
(1.32 MB 1440x960 catbox_kpyc4r.png)

(2.34 MB 1248x1248 catbox_a0bk3f.png)

(1.90 MB 960x1440 catbox_6tfagi.png)

(1.46 MB 960x1440 catbox_22o56w.png)

>>22635 not sure why only that one failed to catbox, have some more EF lolis as a bonus
>>22637 It came out very nice, lad!
Shame the 'resize by' slider doesn't work with 'inpaint area: only masked' when inpainting, so that you could inpaint that part of the image at native res by setting it to 1.
>>22640 what model is that?
>>22642 holy shit dude
>>22642 >>22643 it's mostly the same combination i used in >>22634 but tbf i swapped out the borscht lora for the kaamin one i made instead
it is powerful
Is ufotable finetune guy alive?
(1.65 MB 960x1440 00052-2561253867.png)

(1.70 MB 960x1440 00062-2395201037.png)

(1.97 MB 1024x1536 00304-2861808783.png)

(2.08 MB 1024x1536 00114-1514664074.png)

>>22649 nice earrings on long ears
Also, for any anon that was also not aware: apparently some time ago xformers became deterministic! I've now finally started using 'em
>>22644 got a link for the kaamin lora?
>>22646 >>22649 damn those are nice elves
(503.91 KB 1280x1536 catbox_1cqxgs.jpg)

(491.55 KB 1280x1536 catbox_x7e4d6.jpg)

(461.53 KB 1280x1536 catbox_x0vrg1.jpg)

(516.32 KB 1280x1536 catbox_7nbb8f.jpg)

I remade my old wagashi style lora, dunno if someone else has already made one but I'd rather use my own loras anyway
The first two make use of controlnet lineart with actual wagashi art as input, the other three do not involve controlnet
https://mega.nz/folder/dfBQDDBJ#3RLMrU3gZmO6uj167o-YZg/file/AKAVgD4S
Full Nelson/Reverse congress is so cursed as a prompt
The bodies of the two characters always end up overlapping so the AI always shits itself trying to figure out what's part of the girl and what's part of the man
>>22655 >Controlnet LineArt What is zat
(8.95 KB 768x80 20231107212128.png)

>>22658 What doez it do, how doez it worzk
(1.50 MB 1024x1536 00016-3857367516.png)

(1.63 MB 1024x1536 00195-2590998771.png)

(1.66 MB 1024x1536 00167-4008008917.png)

(1.78 MB 1024x1536 00367-885505953.png)

(1.75 MB 1024x1536 00068-3685890714.png)

(1.68 MB 1024x1536 00282-1542714572.png)

(1.73 MB 1024x1536 00072-1503231480.png)

(1.76 MB 1024x1536 00344-3889378717.png)

(9.62 MB 3072x4608 catbox_sfac5h.png)

Which checkpoints are considered the best right now?
>>22660 do you ever proompt or only train? saved quite a few of your preview images.
>>22660 Nice, the backgrounds are quite cozy with this style. >>22662 I'm still using B64 most of the time but I do use B67 as well. Not a fan of Counterfried-v3 and other mixes like it.
>>22662 >bottomheavy which lora is that
>>22666 Catbox for the first one please? Very nice
>>22671 Box for that one?
(1.70 MB 1024x1536 catbox_rji5ev.png)

has anyone tried to train a style locon or lora with prodigy? if so, what settings did you use?
(2.27 MB 1280x1920 catbox_1ebq4a.png)

Man it does feel like colab kill slowed down those threads considerably. Sad.
some NAI gens on the updated model. not as good as A1111. their from-scratch model might be good though.
NAI announced their new model based on SDXL
https://twitter.com/novelaiofficial/status/1723025509578162549
Seems like it's very nsfw capable although a lot of work had to be done to tard wrangle it
>>22679 i've found it's a lot more responsive to composition tags than local is. quality isn't nearly there of course, but something like this would take 3-4 inpainting steps to do on local.
(2.05 MB 1024x1536 catbox_lul1ay.png)

(2.24 MB 1024x1536 catbox_pf8xxb.png)

(1.85 MB 1024x1536 catbox_wwpexe.png)

(1.78 MB 1024x1536 catbox_q10ia0.png)

>>22684 yeah, agreed. great concept btw
>>22684 >>22685 Looks actually nice but I can't imagine attaching my personal info to cunny
if only nai accepted monero
(1.83 MB 1024x1536 catbox_fb15er.png)

(1.84 MB 1024x1536 catbox_7s1bt7.png)

(1.93 MB 1024x1536 catbox_78sbbg.png)

>>22686 I'm paranoid about that shit too. But considering NAI started up in response to AID censoring and inspecting text gens, I think they have some cred. I also figure it's in NAI's best interest to keep customer gens private since it protects them as well. (last dildo spam from me)
>>22688 catbox?
>>22690 woah what is this model. that tomoyo and sakura look better than anything I've seen on civit too
>>22681 Wonder how much SAI paid NAI to fix their model
(1.77 MB 1024x1536 catbox_noslxm.png)

(1.66 MB 1024x1536 catbox_051fif.png)

(1.70 MB 1024x1536 catbox_jt03l5.png)

(1.68 MB 1024x1536 catbox_1snh3t.png)

>>22686 i was using their text service long before they added imagegen, if anything hangs me it's gonna be that
>>22695 Try DWpose for controlnet instead, it should work more precisely
>>22697 Isn't that only a preprocessor?
>>22695 Well, best one I could do.
(1.79 MB 1280x1280 catbox_bc6h7k.png)

>>22691 <lora:Card Captor Sakura Anime:0.2:0.4><lora:cardcaptorSakura_sakuraEpoch6:0.2:0.4>
https://civitai.com/models/187234/clamp-card-captor-sakuraanimation-version-artist-style
https://civitai.com/models/20451?modelVersionId=24316
Output is possible by adjusting the blend ratio of the two.
(1.61 MB 960x1280 catbox_yethk9.png)

(1.73 MB 960x1280 catbox_gi4rwq.png)

(1.51 MB 960x1280 catbox_acz4mc.png)

>>22701 is the checkpoint a custom mix?
>>22699 that came out well. trying for very specific stuff is often frustrating tho
(10.41 KB 704x601 scribble.png)

(988.11 KB 2112x1536 catbox_7sxdud.jpg)

>>22699 putting your good image through scribble seemed to work way better than the pose controlnet
>>22705 I've preferred depth and lineart over pose for quite some time now. Pose just seems to not really work how it's supposed to, just incidentally works well enough most of the time
>>22706 I've only ever gotten use out of openpose for framing multiple subjects. It doesn't really work well for actual posing.
(457.61 KB 704x512 catbox_glydvj.png)

(522.31 KB 704x512 catbox_shb6co.png)

(5.74 KB 704x512 from-side-cowgirl_pose-m.png)

(4.63 KB 704x512 from-side-cowgirl_pose-f.png)

>>22705 >>22706 Didn't think scribble would be this good. I also considered lineart or canny, but went with depth. I did it by generating the boy and girl solo using pose, picked the best ones and photoshopped them together, then generated a depth map from that, which I further refined using levels so that both were the same level of depth with no background. Then I generated a bunch of images using the depth map, picked one and did further photoshopping/inpainting to get the final result.
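If anyone wants to script that kind of multi-step flow instead of clicking through the UI, here's a rough sketch against the A1111 webui API (launched with --api) plus the ControlNet extension. Treat the ControlNet arg names as approximate since the extension's JSON has changed between versions:

# Rough sketch, not a drop-in script: assumes a local A1111 webui started with
# --api and the ControlNet extension installed. Exact ControlNet arg names
# ("image" vs "input_image", etc.) vary between extension versions.
import base64
import requests

API = "http://127.0.0.1:7860"

with open("depth_map.png", "rb") as f:          # the hand-edited depth map
    depth_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "1boy, 1girl, sex, cowgirl position, from side",
    "negative_prompt": "worst quality, low quality",
    "width": 704,
    "height": 512,
    "steps": 28,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "image": depth_b64,   # feeding the map directly, so no preprocessor
                "module": "none",
                "model": "control_v11f1p_sd15_depth",  # whatever depth model you have installed
                "weight": 1.0,
            }]
        }
    },
}

r = requests.post(f"{API}/sdapi/v1/txt2img", json=payload, timeout=600)
r.raise_for_status()
with open("out.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))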
>>22709 I didn't save the image I used for generating the depth map sadly.
>>22709 >>22710 a lot of work to get there, and a good result. sometimes satisfying to do a high effort gen.
>>22706 controlnet almost motivated me to try to learn to sketch
(1.07 MB 1024x1536 00179-429167615.png)

(1.47 MB 1024x1536 00013-3623282045.png)

(1.65 MB 1024x1536 00026-599786999.png)

(1.85 MB 1024x1536 02050-202391918.png)

https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg
added yotsumi shiro
>>22663 i mean i prompt the preview images lol. i train more these days though when i have the time
(800.74 KB 1024x1536 00228-2958708275.png)

(1.20 MB 1024x1536 01041-2019121981.png)

(1.43 MB 1024x1536 02455-679728863.png)

(1.36 MB 1024x1536 00288-1958441762.png)

Made an Uekura Eku lora
If you don't add any background tags (indoors, outdoors, room, etc) it'll add flowers, stars, shapes, etc around the subject, even more if you prompt simple background and full body
>>22714 Every time I make a lora I have to remind myself to actually grab the link https://mega.nz/folder/dfBQDDBJ#3RLMrU3gZmO6uj167o-YZg/file/tXBVQBoQ
>>22706 Do you draw something yourself or do you use an existing image to extract something from? And how do you deal with details like background or hairstyles, if you want a different one?
>>22717 I use existing images and just edit them. Lineart is extremely easy to modify, especially if you have "My prompt is more important" ticked. This option ignores most heads and hairs grabbed through controlnet, unless it's something like a side ponytail. You can just erase all/most of the hair and it will work fine. For depth, replacing the entire head with just a vaguely head shaped circle works just fine.
(3.15 MB 1280x1920 catbox_nj873d.png)

>>22714 Very cute!
(1.67 MB 1024x1536 00078-947572358.png)

(1.75 MB 1024x1536 00022-191438142.png)

(1.40 MB 1024x1536 00031-538760179.png)

(2.18 MB 1024x1536 00139-2666574936.png)

(1.71 MB 1024x1536 00090-2633401958.png)

(1.83 MB 1024x1536 00053-2225401971.png)

(1.87 MB 1024x1536 00020-2663577845.png)

(1.62 MB 1024x1536 00322-3130714120.png)

(1.51 MB 960x1280 catbox_g23njt.png)

(1.65 MB 1280x960 catbox_3qxjtq.png)

(1.69 MB 960x1280 catbox_9slprn.png)

(1.68 MB 1280x960 catbox_bgxruz.png)

>>22722 Yeah, in a couple of hours when I'm home.
>>22723 hot. if we ever get to the point of being able to prompt entire hentai episodes, first on my list is "young witch academy: tentacle attack!"
(4.45 MB 3072x768 11111111111111111111.png)

(530.69 KB 512x768 catbox_0foony.png)

(544.95 KB 512x768 catbox_5bqh7n.png)

(532.44 KB 512x768 catbox_511syw.png)

is there a tag for these hip indentations? best I could find is bare hips, lowleg
(2.46 MB 1280x1920 catbox_wq1yx5.png)

(3.31 MB 1280x1920 catbox_5iel41.png)

(3.13 MB 1280x1920 00094-1399387319.png)

(4.18 MB 1280x1920 00082-558537454.png)

>>22726 Groin
>>22727 thanks!
(2.41 MB 1024x1536 catbox_p4lzic.png)

>>22722 https://files.catbox.moe/n7k9mp.png I'm not sure which High Elf ranger LoRA it is but you can find it on friedAI. I tested like 3 of them, and they all were pretty fried.
>Sad Panda now charges GP to download from galleries that are over a year old
what the flying fuck?
>>22730 That's been a thing for a while though?
>>22732 Only the super good shit had mandatory GP, now it's everything that's over a year old. I was slowly using my free viewing limit to batch download galleries I had favorited for dataset purposes but now there are 4-5 figure GP costs on some of these galleries and it's not peak hours. The only way around it is single full resolution image downloads but fuck that's a drag and I'm not sure how far I can go before I get limited.
>>22733 Doesn't everyone have trillions of GP
Do you need a donation anon
>>22734
>Doesn't everyone have trillions of GP
how
>>22735 Been running an H@H client myself for several years and have never had an issue with any of the currencies, don't really see anyone bringing them up either
>>22736 I was planning on getting an H@H set up. Need anything in particular or just set up their application?
(1.77 MB 960x1440 00042-2746422487.png)

(1.70 MB 960x1440 00125-1184479673.png)

(1.76 MB 960x1440 00098-4229741401.png)

>>22729 Bless you and your prompts
>>22731 May I ask for the brownbox of the first three
>>22737 All requirements should be detailed on the H@H page
once again I am willing to give a donation, assuming you don't really care about having to say what your e-hentai username is
I use a custom mix of futanarifactor and based67/66/64 that seems to be completely incapable of generating flaccid penises, so I made a lora for it (all other loras I see make me want to die)
It's called smallpenis because the training data was almost all small but it seems the penis just gets scaled relative to the body so it doesn't really end up small at all sometimes. 7 and 8 should work with vanilla futanarifactor
https://mega.nz/folder/dfBQDDBJ#3RLMrU3gZmO6uj167o-YZg/file/Jf4wlZKY
Here are examples including my moisture lora. Warning: wenis and fat (not that fat)
nai 3.0 is up
thirty second impression: seems a lot cleaner but prompts differently enough that it's gonna take some experimentation
>>22747 welp, seems pretty good
well they definitely didn't purge artist tags this time
>butthole peek behind a thong
>"cum in anus" puts cum in the anus
>handholding blowjobs
>hands-on-head nose-to-pelvis irrumatio
yeah it's definitely a step up in composition
one eye closed, 100% success rate
strap pull, 25% success rate
tanlines, one-piece swimsuit, adjusting clothes, mostly misses
>>22593 Could you upload catbox?
perspective upskirt: easily
ring gags: nope
ball gags: yep
handcuffs: still hit or miss
tentacles: somewhat improved, does the swallowed limbs thing without much begging
viewer holding leash: surprisingly, yep
nai v3 doesn't seem to have a preferred style nearly as strong as v1, i'm looking forward to seeing what shakes out of it in the coming weeks
and now a word from our sponsors
I'm trying to create a character lora for a character with a wide variety of expressions, would it help generation if I tag each expression in the dataset? (I think I have 5-10 references for each)
>>22752 Can do but I can't find txt2img for the ancient anyv3 gen though.
https://files.catbox.moe/w9pap1.png
https://files.catbox.moe/cai5k0.png
>>22754 Impressive, even with the hooters LoRA it could fuck up the text even more than that. I have a few gens that ended up with "rioter" instead of anything even close to hooters.
>>22753 Very impressive, there are lots of upgrades honestly, imagine if we could have this checkpoint leaked... Lora training would never be the same
(2.21 MB 1280x1920 catbox_t47ej0.png)

>>22754 Finally it looks like NAI is brought up to what should be standard by now
anal beads, check
wrist grab doggy style, check
vibrator in thigh strap, check
anus cutout denim shorts, in my opinion the real test for a model, 20% success rate
x-ray is hit or miss
mizumizuni is very on point
feels like they've pushed the post threshold for the model to have a vague idea what you're asking it for down to maybe 150 pictures, vs ~800 for v1. mokuren has 103 currently and it has like 20% ballpark accuracy with supporting tags.
done flooding the thread for a while, unless anyone has something they want me to test
been away for like a month, anything new?
>>22759 any chance for a catbox on those?
>>22763 nai stores their metadata in the alpha channel, you can just save them and drag&drop
Does NAI 3.0 do nipple and vulva piercings well?
>>22764 fuck yeah that's good to know. really nice gens and experiments.
>>22761 here's hoping this gets leaked as well
>>22767 Sadly it will never be. It is SDXL tho
I miss the wild west phase bros
>>22765 nipple piercings it can make a decent attempt at, pussy piercings seem hopeless
>>22769 don't worry, this is gonna look like the wild west in retrospect compared to what's coming lol
>>22770 Is Sylvie from Teaching Feeling trained on SDXL anon? I really doubt it but I got my hopes up
>>22773 about 80%, needs some supporting tags
surprisingly it does ray-k's style pretty well but draws a lot harder from his other games. nobody tell him, he'd hate it.
>>22774 HOLY SHIT I'M GONNA KILL MYSELF IF NAI 3.0 DOESN'T LEAK IN THE NEXT 30 SECONDS
>>22774 Post some of Ray's style please, try it with different characters
>>22776 it's pretty hit or miss, any character that's too close to one of his gets fully abducted by it and his style isn't strong enough to break a well-established one
still a lot of prooompting ahead to get a better sense of how this works
Why is Based64 considered the "de-facto" checkpoint?
>>22778 flexible, well-behaved, handles loras well
>>22777 is it basically 25 bucks or nothing if you only wanna do image gen with their service? I assume there's plenty of flukes so the ~200 generations you get from the 10 bucks tier are barely enough?
>>22780 pretty much, yeah. dimensions have a big impact on image composition and quality like always, so you want to go large and even the $15 tier is only good for a day or two of serious proompting.
>>22781 Thanks. I guess I'll consider it. Meanwhile, could you post more Kuro please?
(2.09 MB 960x1440 catbox_949rly.png)

(2.15 MB 960x1440 catbox_wa008c.png)

(2.18 MB 960x1440 catbox_k480dx.png)

(2.24 MB 1248x1248 catbox_vw6m6t.png)

nu-nai not pruning artist tags is huge but unfortunately unless the model leaks i cannot be fucked to care, training loras for poorly grasped characters and concepts has become so integral to the experience of generative AI for me that i can't see a point in engaging with the technology without it
Can NAI 3.0 do Naizuri? (A flat chested girl trying to do a paizuri)
>>22783 Nice
>>22785 This guy really thought he had to explain naizuri here
>>22787 Sorry, I'm used to dealing with huge-tits-fag retards.
>>22787 I still haven't genned any naizuri, I should do that.
>>22778 it's called based64
>>22791 >>22792 Man they really did a good job with nsfw stuff it seems. Interestingly, they seemed to completely nuke the backgrounds compared to other SDXL models.
slowly approaching quality
>>22784 it wouldn't surprise me if i end up using nai v3 just as a way to generate controlnets for local. i enjoy mixing loras too much to go back to fifteen million curly brackets but man the improvements in anatomical understanding are just so nice.
>>22762 guess not
>>22798 saw that 2.0 is out, any reason to actually use it? seems like the new high res modes will eat up your credits really quickly
>>22799 2 was a mild improvement, 3 is sdxl based and a big step up in anatomy. they also didn't purge artist tags this time.
>>22800 there's a 3 already? didn't 2.0 come out like a month or two ago?
>>22801 yeah, i think 2 was a test run of their new training set. it's definitely not just danbooru2021 anymore.
>>22802 holy shit yeah, what the fuck
wish they updated shit this fast back when i was a paypig
time to test it i guess
Does NAI 3.0 know Wabaki?
>>22804 doesn't look like it. i'm pretty sure the new corpus is still danbooru-based because it seems like the loli artists it knows are the ones that have a decent amount of non-loli or non-explicit work. fingers crossed that they train a tagging model on top of 3.0 and start expanding their material.
(2.19 MB 960x1440 catbox_o2ptey.png)

(2.10 MB 1440x960 catbox_t1dv9h.png)

(1.76 MB 960x1320 catbox_oxkwr4.png)

(2.12 MB 1440x960 catbox_bjqy75.png)

>>22797 It's been closer to two months but I am still just warping fluffyrock to do what i want because it's a far superior 1.5 base model to do 2d stuff on compared to NAI. I think if NAI were smart they'd hang on to their XL model for a month or two and then release it publicly so people could train LoRAs to upload for web users, but that is admittedly very wishful thinking.
>>22806 for better or worse, NAI has their eye entirely on the "don't end up in front of a senate inquiry for 'virtual Democrat activism' " ball. user loras are definitely a never ever. wouldn't be surprised if they released 2.0 though.
haha funny word filter virtual cheese pizza
that's it i'm fucking done, i got fucking btfo'd
Holy shit this is hilarious, I think they merged the cutesexytobutts and shexyo datasets lmao
>>22810 Looks like it can do the popular Friends just fine, everyone who trained a Kemono Friends LoRA got btfo'd.
i need more of that spread pussy peek
So what now, do we just wait for someone to kamikaze himself into the NAI server room, steal the new model and code, upload it and hopefully parachute out?
>>22809 >>22810 I've seen nai3 csrb gens that are very close to his style. Weird that they look like this for you
>>22815 I could be getting the short end of the stick, judging from the watermarks they really did fuse shexyo and csrb's datasets
>>22816 well someone posted this on cuckchan /g/. think this is much closer to his style
>>22816 tried putting one of them in your negative?
>>22818 Not yet. Only genned a few pics, I've been busy with backing shit up after a catastrophic Windows failure and my games drive won't get recognized and I have no fucking idea how to fix it, working on that rn.
>>22819 good luck genner
Getting error 500 when generating, the fuck?
maybe im stupid but i tried many ways and don't get any metadata.. maybe 8ch is stripping it?
(for the NAI3 pics)
>it gets the blue archive halos right
Fucking how though? How is it more accurate than even the most autism-fueled 70 iterations LoRAs?
>>22824 model resolution probably?
>>22825 I'm testing random stuff that used to be problematic and so far the only thing that's still somewhat hit or miss are Flandre's wings. (it also still can't do Holo flawlessly with just the tag somehow - eg it forgets the tail or her color, how the fuck lmao but adding Spice and Wolf fixes it) Really wish this is gonna leak, the expanded character set and massively improved pose consistency would be so nice.
>>22826 Well it leaking would push the model even further, like what happened with the original NAI. Well I won't be paying to use this, I like the control we have with local too much and I kinda think most NAIv3 gens look kinda boring and generic style wise.
>>22827
>I kinda think most NAIv3 gens look kinda boring and generic style wise
the ones with style prompts look good wtf? I was actually getting sick to the stomach with this based64 and shitmix style we had that bled through even with thousands of loras. don't even mention furryshit, it's straight up repulsive
(1.94 MB 1152x1152 catbox_4r0i58.png)

>>22812 I guaran-fucking-tee you it still struggles with the vast majority of Friends, especially anyone not from the anime. More parameters doesn't make it omniscient.
>>22828 Nothing has really wowed me yet but style comes down to personal taste anyway. The model is clearly a huge improvement when it comes to anatomy, hands and composition though.
>>22830 >The model is clearly a huge improvement when it comes to anatomy, hands and composition though. This and the vastly improved details due to higher resolution data + more parameters is what PISSES ME OFF because we won't see the model utilized to its full potential unless they release it or it leaks. Fluffyrock already demonstrated that NAI was infantile as an SD 1.5 model, now an actually good SDXL anime model exists but nobody can get their hands on it to do further training, make use of extensions, generate in higher resolutions, etc, etc. I am unironically SEETHING over proprietary AI.
>>22831 Yeah, feels bad man.
>rumor I made up months ago resurfaces
>>22831 problem with SDXL is that the architecture is wonky (unet too big, tenc still too small) and there are functionally no real base weights to train from since SAI only released merges
there'd need to be another project like zerodiffusion but for SDXL, but that's just going to take forever
>>22822 nai will load the metadata from the alpha channel, it's not otherwise readable
>>22833 any ML people know roughly how much it would have cost NAI to train their 3.0 model? is it in the realm of possibility for a community to fund in open source?
just started genning last week. I'm finally starting to get some nice results from lurking here and various other places
>>22838 Cute
>>22838 good stuff also unrelated picrel with totally accurate internal cumshot
>>22835 here's their run schedule https://wandb.ai/novelai looks like probably a month. they've got a h100 cluster they purchased with v1 japbucks, no idea how much of it was dedicated to training or if they're running some of their services on it.
also i have to say i'm becoming increasingly pleased by nai3. i was expecting to quickly run into some irritations like i have with all their other models including text, but it's just a solid performer. it does artist mixing well and even fairly bizarre outputs like #4 still don't lose anatomical sense like bad lora mixes on local often do.
>>22843 oh and also, higher resolution pictures that require analcoin are equivalently higher quality, but because the generation is done in one step, changing to a higher resolution produces a different picture so you can't really tinker around on the free gens and then switch to the priced ones once you find something you like. gotta just upscale.
>>22842 Are they not censoring it?
>>22845 the whole company is coomers, fujos, and waifufags. only censorship they've ever done is try to strip 3d vectors out of their image models for the obvious legal reason.
How exactly does first leak happened?
>>22829
>I guaran-fucking-tee you it still struggles with the vast majority of Friends
I guaran-fucking-tee you they're not the ones I jerk off to.
>>22847 guy who did it implied it was an improperly secured github
people have since blown this up into a github zero day because that's more exciting
you can dig through the /g/ archives if you're bored, beginning of october of last year
Is there a list of artist tags supported by NAI 3? I can't get Marota to work.
>>22850
>improperly secured github
Wonder how that was even possible, since github is not hosting the weights itself AFAIK
>you can dig through the /g/ archives
Alright, I'll try to dig there, was it already /sdg/ or somehow different?
>>22854 i think it was, yeah
>>22852 >zankuro UOOOOOOOOH
>>22847 >>22850 No solid proof other than this post, which was the first mention of a 0day and came very soon after the leak happened. https://desuarchive.org/g/thread/89032958/#89033139
>>22849 welp i guess it's time to kms myself
>>22854 It was already /sdg/ because SD released a few months before the NAI leak, though I'm pretty sure the prevailing theory behind it was someone who had access leaked it intentionally. If it was an exploit it would have had to have been for both github (where the code is stored), and for huggingface (where the weights are stored). I have certainly heard theories that it was an exploit but it seems exceedingly unlikely a zero-day for multiple git hosting services was only used to leak an anime AI imagegen model.
>>22852 Great list, thanks anon. Crazy that even that list is missing a bunch of artists I'm testing NAI with that it recognizes really well. One thing it does get wrong though is it shouldn't be including banned artists (those who DMCA'd their work on danbooru). Belko clearly isn't recognized at all for example.
>>22858 It does struggle with Mirai somehow but I didn't really test it that thoroughly, I ran out of credits a few hours ago. I bought a subscription key for a friend last year and he hasn't used it yet, I'm somewhat tempted to use it but nah.
>>22861 That said, they should REALLY update their subscription tiers if they haven't already, IIRC when V2 was new they didn't give free gens on it. With the influx of subscribers they should be able to offer more options for the same price or even a bit more.
I wonder if they're gonna keep updating it or if they'll just reimplement Controlnet and call it a day.
>>22865 hoping we at least get prompt editing
(2.59 MB 1280x1920 catbox_xvkpml.png)

>>22754 Well I guess I've got NAI for a month now
>>22866 Ipadapter would be nice if they're dead set on not allowing people to train/use LoRAs, it'd be more like slapping a bandaid on a bullet wound but at least it'd be better than nothing
(2.68 MB 1280x1920 catbox_4xjrif.png)

(773.58 KB 640x960 catbox_nmi00q.png)

It's funny how NAI seems to either know a character completely or not at all
Do you have to use analcoins or whatever if you want to generate with the other two "normal" resolutions? Seems pretty retarded that they'll let you do 1024x1024 for free but not the other two.
>>22870 no, you can do portrait and landscape too. their phrasing should be reworded to clarify 1024x1024 is referring to megapixels, not the resolution per se.
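Rough numbers, assuming the free portrait/landscape presets are the 832x1216 / 1216x832 sizes people have been posting: 1024 x 1024 = 1,048,576 px, while 832 x 1216 and 1216 x 832 are 1,011,712 px each, so all three sit at roughly the same ~1 MP budget.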
>>22871 Oh, cool. There's that at least.
>>22867 >dodoco and all the clovers look right Sorcery.
(2.86 MB 1856x1280 catbox_za5tl5.png)

>>22873 Yeah it's amazing how well it knows character details. Even with all my lora efforts over time I could never get the white skirt details and the two "keyholes" would always mess up from time to time. NAI nails them every time. There seems to be a bit of a learning curve with using artist styles where prompt order matters a lot.
>>22874 That's what intrigues me, how did they get such good accuracy nearly every time? Not even the blue archive ultra-autists managed to do it. Is it just unet size and the higher res? Most LoRAs are trained at 1024x now, is that not enough?
https://files.catbox.moe/9nugyz.safetensors
uploading the rare mizumizumi lora for posterity. taking a break from this stuff
(1.18 MB 832x1216 catbox_fsgb9z.png)

(1.16 MB 832x1216 catbox_gy0nzq.png)

(1.25 MB 1216x832 catbox_bg6pye.png)

(1.28 MB 1216x832 catbox_431exz.png)

Also for anyone else trying out NAI, do yourself a favor and switch the default sampler to DDIM.
>>22877 Why? I feel like regular Euler is giving better results than Euler Ancestral and 2S/2M (I also don't get why they recommend 2M in the documentation but put 2S as the recommended one in the UI)
Also nice, was trying to get fizrot to work but it just didn't feel like it.
(1.19 MB 832x1216 catbox_p4b6uh.png)

(1.08 MB 832x1216 catbox_t43v1t.png)

(752.83 KB 832x1216 catbox_pcm0xc.png)

>>22878 It's hard to do side by side comparisons because results will almost always be different but here's the best I could get. Ignoring the wrong details, I almost always prefer the shading/lighting with DDIM and it's just a bit more complex overall. Here's also the Euler version of the fiz rot I posted before, though I should note that it could be argued that it's more accurate because I didn't prompt for a background. I had some better comparisons earlier when I was testing but I already closed it before I thought to save them
(1.03 MB 832x1216 catbox_2ruev9.png)

(1.02 MB 832x1216 catbox_o8s6j1.png)

(945.91 KB 832x1216 catbox_1uul0c.png)

(1.10 MB 832x1216 catbox_boaqj3.png)

After digging through raws for usable manga shots I've finally achieved an AI art holy grail of mine https://mega.nz/folder/dfBQDDBJ#3RLMrU3gZmO6uj167o-YZg/file/kCwSXDgJ
>>22879 Yeah, I see your point. I wonder why they don't recommend it as default.
(986.78 KB 832x1216 catbox_13dvym.png)

(955.48 KB 832x1216 catbox_j1vf33.png)

(1011.04 KB 832x1216 catbox_scuwch.png)

(1005.91 KB 832x1216 catbox_webj04.png)

>>22881 Very nice, the anime did her dirty
>>22882 No idea, unless they're hoping people are dumb enough to waste anlas on more steps instead of looking at the other options
>>22883
>No idea, unless they're hoping people are dumb enough to waste anlas on more steps instead of looking at the other options
Is a higher step count ever really beneficial on the commonly used samplers? From all my experiments DPM++ only requires like 20 steps and Euler Ancestral 25-30. They did mention that SMEA/SMEA DYN don't work with DDIM, maybe it's that. Also apparently they just added the karras scheduler.
>>22879 you should grab a more modern negative and positive before comparing samplers or else you're just comparing bad vs bad
>>22886 i posted mine so you can have a look. the positive quality tags changed with v3. here's the presets, they get appended to the end of the prompt. one of the devs suggested on discord that the same should be done with your own quality tags.
quality:
>best quality, amazing quality, very aesthetic, absurdres
UC light:
>nsfw, lowres, jpeg artifacts, worst quality, watermark, blurry, very displeasing
UC heavy:
>nsfw, lowres, {bad}, error, fewer, extra, missing, worst quality, jpeg artifacts, bad quality, watermark, unfinished, displeasing, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract]
if you don't have nsfw in your positive, the UC presets automatically insert it into your negative. not that it matters lol, any nsfw prompt will overrule that pretty easily.
>>22887 This is good to know, thanks. I'll have to play around with these
>>22888 also i just found out that if you leave your negative blank, "lowres" always gets inserted. if they picked that one it must be useful.
(1.17 MB 1216x832 catbox_6ti3zp.png)

(1.21 MB 1216x832 catbox_ckltva.png)

(1.31 MB 832x1216 catbox_l86s3h.png)

(1.32 MB 832x1216 catbox_peieck.png)

>>22889 I like the quality tags, but I still am not a fan of the undesired content presets. Funnily enough it seems to be because of the lowres tag because even when I add it in manually it ruins the atmosphere of the whole gen. I've repeated it multiple times on multiple different gens with the same results
THE FIGHTING POLYGON TEAM
>>22890 good to know. i'm surprised lowres had such an effect, but looking at the tag there's a lot of pictures with a focus on simple, punchy composition.
>>22859
>though I'm pretty sure the prevailing theory behind it was someone who had access leaked it intentionally
Yeah, that does seem more realistic than a 0day, tbh. Or maybe it really was intentional, since open sourcing the model themselves would bring some consequences or smth?
>>22857 That was funny to recall though, this one and the previous threads, the unpickled times before safetensors was a thing.
>>22893 i seriously doubt either of those theories. especially since at the time nai had a small tight-knit team and nobody's left in the interim, which you'd expect a leaker to do. the only disgruntled "former employee" i'm aware of was a discord janny for about three days before getting booted and has been a retard schizo about it ever since.
the leak completely took the wind out of their sails. they had the world's leading (public) anime model for like a week, just getting traction and attention around the world, then it got hacked and they had to lock down completely and do security audits for months while the free scene built entirely on their software surpassed them. you used to be able to see the hole on their wandb page, a month and a half of nothing rather than pushing their advantage. anyone who has had, or accurately fantasized that they had, that sort of field-leading moment can imagine how bad that hurt. they're still recovering.
not that it stopped me from using it, of course.
Any way to view NAI prompts with TweakPNG?
>>22896 Probably not, as the metadata is in the image's alpha layer. If you load the image in photoshop and use levels on the alpha to actually make the transparency noticeable, you can see the left side of the image pixelated. There was an addon for the webui that anons used for a while that saved your metadata into the alpha too, it also let you read it in pnginfo, but I don't know if it still works or if it's in the same format. It was called something like stealth pnginfo iirc.
>>22842 >>22843 >>22852 learning from these prompts thanks
>>22892 i'd play this game
infinite rustle, fuck me. good tip from another anon. {{artist:rustle}} works way better than just rustle. not sure of a better way to prompt this, but the fact this worked without loras is crazy
(1.33 MB 832x1216 catbox_du6a8k.png)

Oh yeah forgot the most obvious test for nai 3
(3.01 MB 1920x1280 catbox_kordgh.png)

(2.76 MB 1856x1280 catbox_39uspu.png)

(1.52 MB 832x1216 catbox_v6ax0o.png)

(1.40 MB 1216x832 catbox_djasbw.png)

>>22900 fuckin whew man
>>22896 >>22897 It stores metadata in both infotext and alpha channel. They literally copied the stealth pnginfo implementation because the checksum is identical. You can also view it in TweakPNG assuming the infotext portion wasn't wiped.
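For anyone who wants to pull that alpha-channel copy out without photoshop or TweakPNG, here's a rough Python sketch. It assumes the stealth-pnginfo layout (LSB of the alpha channel read column-major, a magic string, a 32-bit bit length, then a gzip'd payload) - I'm going from memory on the exact magic and offsets, so double-check against the extension's source before trusting it:

# Hedged sketch: pull LSB-embedded metadata out of a PNG's alpha channel.
# Assumes a stealth-pnginfo-style layout; the uncompressed variant reportedly
# uses a different magic string and no gzip, so adjust if this doesn't match.
import gzip
from PIL import Image

def read_alpha_lsb_metadata(path: str) -> str:
    img = Image.open(path).convert("RGBA")
    w, h = img.size
    alpha = list(img.getdata(band=3))            # alpha channel, row-major
    # re-order column-major: pixel (x, y) -> alpha[y * w + x]
    bits = [alpha[y * w + x] & 1 for x in range(w) for y in range(h)]

    def take_bytes(start_bit: int, n_bytes: int) -> bytes:
        chunk = bits[start_bit:start_bit + n_bytes * 8]
        return bytes(
            int("".join(map(str, chunk[i:i + 8])), 2)
            for i in range(0, len(chunk), 8)
        )

    magic = take_bytes(0, len("stealth_pngcomp"))
    assert magic == b"stealth_pngcomp", f"unexpected magic: {magic!r}"
    payload_bits = int.from_bytes(take_bytes(len(magic) * 8, 4), "big")
    payload = take_bytes(len(magic) * 8 + 32, payload_bits // 8)
    return gzip.decompress(payload).decode("utf-8")

if __name__ == "__main__":
    print(read_alpha_lsb_metadata("nai_gen.png"))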
>nai won It's over
>>22902 it's over
>>22902 >>22903 >it gets the eyes right AAAIIIIIIIIIIIEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE NAI-SAMA I KNEEL
V3 is probably about half a year ahead of local. Maybe less.
>>22902 How did you estimate that exactly?
(1.13 MB 1280x896 00061-3666007750.png)

>>22911 rape
(1.00 MB 832x1216 catbox_3ov2v4.png)

(920.24 KB 832x1216 catbox_pad8cl.png)

(970.20 KB 832x1216 catbox_z57cj9.png)

>>22914 Cute Ioris
Does NAIv3 recognizes lilandy or opossumachine?
>>22916 opossummachine yeah, lilandy falls short
(1.33 MB 1216x832 catbox_49dusd.png)

Well fuck
>>22914 >>22915 fuck me, that's the good shit
>>22919 My lora vs NAI which only needs one token to get the whole character better than I ever could with
>loli, fang, wristwatch, dress shirt, collared shirt, pink shirt, bare shoulders, open clothes, off-shoulder, short sleeves, partially unzipped, blue jacket, drawstring, striped socks, shoes, power symbol, electric plug, tail, headphones, lightning bolt symbol, hair between eyes, sidelocks, long hair, white hair, star-shaped pupils, aqua eyes, aqua nails
>>22909 I'm pretty sure a model trained the way fluffyrock was could achieve the same effect, and without the baggage of SDXL. So far I haven't seen anything that couldn't be achieved with a properly modern non-XL checkpoint
>>22922 not holding my breath for that
(1.40 MB 832x1216 catbox_jotcy2.png)

(1.41 MB 832x1216 catbox_6nsuat.png)

(2.33 MB 1280x1920 00263-2755878683.png)

(2.28 MB 1280x1920 00261-2755878683.png)

>>22921 Last thing but found it interesting that black collar actually correctly fixes the shirt collar instead of giving her a dog collar like the tag technically should and does on original NAI
>>22922 >So far I haven't seen anything that couldn't be achieved with a properly modern non-XL checkpoint Ok, forget about it being sdxl for a moment, what about character accuracy?
>>22926 What about it? If you're asking me if fluffyrock has built-in characters, yes it does
>>22927 But are they accurate? Quality is better than quantity, what >>22925 said is exactly why we're behind. We can have a LoRA trained by giga-autists and still need nearly half of the token "limit" or more just to get semi-decent results.
(470.63 KB 640x960 00277-2755878683.png)

>>22927 And you can prompt Miku on original NAI and get Miku, that doesn't make it as good
>>22926 The character accuracy is definitely not matched by current furry models. Digitan (this character >>22925) currently has 145 tagged images on danbooru, and the artist of the character has 638. I still have to prompt for a couple extra details; NAI gets it nearly perfect. For comparison, Alvin from Alvin and the Chipmunks has a current tag count of 505. Keeping in mind the total post count is 3,853,671 on e621 and 6,864,426 on danbooru, even still I can't get an accurate gen of the character at all.
>>22928 >>22929 Sounds like a training thing rather than anything inherent in SDXL or any secret sauce. Tag count isn't data count. Just because there are tons of images with x character doesn't mean those images were actually used in the dataset. With loras, tags can just be pruned if you want to reduce the amount of tokens "needed", I don't like doing that myself for my own loras. My point was this can all be done now without SDXL and the people behind NAI aren't geniuses.
>>22929 Can you please bring your furfaggot shit and your malding elsewhere?
>>22930 >My point was this can all be done now without SDXL and the people behind NAI aren't geniuses. So why hasn't it been done yet?
>>22932 because this subcommunity (anime ai artfags) is filled with poorfags and people unwilling to coordinate, (and the ones that are, WD, pivoted to fucking LLMs while still supposedly working on the fabled WDXL with no real progress report)
>>22933 t. furfaggot
>>22931 NAI 3 is better than the furry models, why would I be malding about that apart from it not being publicly available? I don't know shit about training models, I just know what can be done with what's currently available and compare them.
>>22935
>I don't know shit about training models, I just know what can be done with what's currently available and compare them.
>I don't know shit about training models so let me tell you why this is nothing big
>>22933
>because this subcommunity (anime ai artfags) is filled with poorfags
mask off
silence furfaggot, you use jewgle gibs
Who started the rumor that NAI might release the 3.0 model in a few months?
>>22933 Furnigger models are only useful for inpainting dicks (because furries are dick loving faggots) and I'm tired of hearing about their filth
>>22938 nobody worth paying attention to
>>22939 >blaming the furries on your dick obsession
>>22941
>furfag site
>penis tag 1332K results
>danbooru
>penis tag 330k results
Furfaggots are obsessed with dicks
>>22937 >>22934 You read into that way wrong, turn off the 4chan tribalnigger part of your brain. Literally just look around you, no one is working on getting a good, new model working and now this thread has unironic novelai shilling. I don't give a shit what you think about furries or where they got their money, the fact is that they made a good model and virtually no one is standing up to do the same for anime.
>>22943 >muh tribalism!!!1!1!1!! Sorry furfaggot, I don't mix boards and communities just like I don't mix races.
>>22944 I don't know if you're just derailing because I interrupted your shilling or you're actually room temperature IQ
>>22943 No one is shilling NAI, if testing out new shit and going "well shit, it's good" = shilling you need to eat the barrel end of a gun, there's no resetting your brain at this point.
>AT LEAST THE FURFAGGOTS ARE DOING SOMETHING, WHAT ARE *YOU* DOING ANIMETARD?
Laughing at you for starters.
>>22946 Who are you quoting?
>>22945 nigger you've been shilling furry garbage for months
>>22948 Literally the first time I've bothered mentioning fluffyrock in weeks because I don't use it Go to /sdg/ or /hdg/ if you want to blindly rage about something
>>22909 that's about right
waifusion late epochs (E20 or so) are scheduled for june next year
>>22948 too bad, that's me
>>22950 Ok that's it, I'm resubbing. I'm so fucking tired of shit hands on local.
>>22953 Seriously? Hands?
>>22954 Seriously? Hands?
it's fucking over
are we really that lazy that we don't want to use inpainting/controlnet to fix anatomy or do you guys just want to get it right by just prompting?
are we really that lazy that we don't want to go to art school and learn drawing ourselves or do you guys just want to get it right by just using soulless ai?
>>22958 you're comparing years of learning to maybe never get good enough against wasting 5 minutes on fixing something you generated to look better.
>>22957 >you guys just want to get it right by just prompting? That's the end goal of pure t2i, yes. It's also way more fun.
>>22960 Disclaimer: I have a gambling addiction.
>>22961 nah i get you, it's just that leaked nai was so bad at it and i'm so used to using a bazillion loras, inpainting, etc that i forgot how fun prompting was
>>22962 Create a "to inpaint later" folder, stick shit in it and make peace with yourself (you're never gonna inpaint them lmao) and just have fun prompting, it's what I do
>>22963 I really do enjoy fixing "rough diamonds" with image editing and inpainting but local is so far behind now. Well less time genning and more time welding is fine by me.
not SD related but we're eating good AI bros, SV Teto will be joined by SV GUMI next month.
>>22951
>WD
I cringed after reading the 1.5b3 notes
>arbitrary date ranges
>aesthetic and quality gradients
>extra tags
>REAL LIFE
>INSTAGRAM
shit is fucked, they poisoned the well for no good reason
>>22951 >>22969 Caaaaan we fix it? NO, IT'S FUCKED!
>>22969 open source ai projects and abject retardation seem to go hand in hand
>>22972 I get the dates thing and I sort of get the idea behind the two gradients, even if anyone would realize that they're terrible ideas after thinking about them for more than two seconds
But why the fuck would you literally poison the model with real life pics? Not only that, but the fact that they inserted the extra tags like "waifu, real life, instagram" etc shows that someone must have thought people would want to filter that shit out, so what's the point? Just download the latest chink/gookmix from civitai and crawl to your tranny gooning discord server instead of poisoning the well for everyone else. Sheer fucking mental retardation.
footchads are you getting anything good out of nu-nai?
>>22969 >>22971
>training on realism and anime with Booru sites that are probably mistagged
Wow they really screwed the pooch on this one
>>22979 still not seeing best girl
>>22980 My bad, here you go. Surely I'm not missing anyone important
>>22981 cheeky but still no
>>22983 FINALLY
This shit's so good, I really hope local can catch up even if it's another miraculous leak. Aside from some small vtubers and such like Nanahira it's been able to do everything I've thrown at it
>>22969 waifusion isn't waifu diffusion
oh lord it can do kentarou decently
Can NAI v3 do sen_(astronomy) or Testa?
>>22991 The Rope it is, then. Revive me when this model leaks.
>>22966 >>22967 oh that's good to know. i tried NAI3 but it doesn't seem like it recognizes teto's SV outfit...
>>22993 (those are just SD1.5 btw)
>>22993 kasane teto (sv) is what you want
unfortunately there probably wasn't enough data at the time so it's not perfect but it's not entirely gacha either from what I've seen
>>22988
>waifusion
that's even worse then, waifusion returns NFT shit
>>22996 lmao, jordach really chose the worst possible name for that model
Can any of the anons that copped naiv3 try to do guns or weapons in general (swords, etc) to see if they still look fucked up? Also maybe try one prompt with a male to see if it can only do twinks
>>22989 Jesus Christ
>>22995 oh got it thank you! economically i don't think it's *that* bad a deal for a yearly subscription if their highest tier gives unlimited medium gens (personally), but i'm missing all the local stuff that comfyui and the like provide
my only UX complaint so far is accidentally hitting the back button on my mouse and losing like 20 gens, why can't they just autodownload for me aaaaa
>>23000 TETOFEET UOOOOH
I haven't used the desktop version in a while but I use Edge mobile and it warns you if you go back/forward while on the image gen url. It's not that big of a hit economically, it costs a bit more than another subscription I have but if you don't have a GPU for whatever reason (ie stuck on an old laptop) or if the electricity company is jewing you I think it might cost less overall. Plus you can just share it with a gpulet friend.
>>23000 >>23001 With that said, what I miss over local is the ability to mix styles exactly how I want through LoRAs. They're reimplementing controlnet for V3 in the future but user LoRAs will always be a dream. Ipadapter could alleviate it slightly but you know how that goes.
>>22950 >lotta gay shit up there i'm not gonna read wise also, nice gens >>22965 >>22972 >>22989 hot
>>23004 the gookslayer is so cute
>>23000 is the first one inpainted?
>>23007 not even inpainted, just edited in krita slightly, here is the original
new invoke is out for the nodefags https://youtu.be/QUXiRfHYRFg?feature=shared
>>23004 well fuck it's so far ahead it's not even funny
>>23012 Video-game screenshots and pajeets getting forced to use toilets aside, is Dall-E 3 even that good for the sheer amount of computing power it requires? I know it's nearly useless for anime (if you can even get it to generate anime)
>>23013 It's great for quick concepts or as reference for compositions with img2img/controlnet. You can just describe a situation in 1 or 2 sentences and it will usually just get it right. For example I tried NAI's img2img (and it makes me appreciate controlnet so much more), here's the reference dall-e gen https://files.catbox.moe/4bo3o7.jpg
>>22993 very cute
in case anyone cares, I updated the hydrus importer script to support NAIv3 metadata (PNG info and the new alpha channel format)
https://github.com/space-nuko/sd-webui-utilities
funny thing, it seems the NAI people took the original stealth metadata extension for webui and reused it for their UI
Damn huge thanks to the anon who gave out the NAI key, just when I was sure I wouldn't be able to play around with it due to some bullshit.
>>23017 Someone gifted you a key? Or is there some super secret TSA master key?
>>23018 Anon just posted some keys on 4cuck hdg right when I was browsing.
>>23019 *gift keys Not some epic sikrit keys, no
>>23014 That's not too bad but I guess you'll have more luck when they reimplement Controlnet in V3
does ipadapter actually work to make quick and dirty pseudo-loras or is it a meme?
>>23016 Nice! By the way, how does the retag argument work exactly? How does it know which files to retag? I was thinking about using it to tag files I already have imported, so I don't have to make a copy of everything in some folder just to import the tags.
>>23023 um apologies but might not want to use that command right now, it's pretty old and outdated
what I was meaning to do with that was to add some new metadata that I handled in newer versions of the script but the existing images in my db lacked (example: notes metadata that embeds the original prompt/parameters) but it was written a long time ago back when only webui prompts were supported so I haven't used it recently
the criteria it uses to retag is: if the autogenerated tags returned by the importer differ from the current tags (to account for bugs/changes in the importer's tag parsing), or if certain notes attached to the image are missing (currently only the raw A1111 prompt text)
the problem is there was no distinction for which tags belonged to which service (personal or from the SD prompts) so I had to retag everything by wiping and readding all the tags for each image except the ones I knew I personally used, and those are hardcoded for my usecase only...
I think to solve this I need to learn how to use multiple tag services so I can put my personal tags separate from the SD tags, and then 'retag' would only replace the tags belonging to the SD service
so if you don't use *any* tags you add yourself after the import, it should be fine - but only *after* I fix all the bugs first...
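in other words the decision is roughly this (illustrative only, not the script's real code - the "a1111 prompt" note key is a placeholder name):

# Sketch of the retag criteria described above.
def needs_retag(current_tags: set[str],
                reimported_tags: set[str],
                notes: dict[str, str]) -> bool:
    # retag if the importer's tag parsing has drifted since the original import,
    # or if the raw A1111 prompt note was never attached
    return reimported_tags != current_tags or "a1111 prompt" not in notes

# example: tags still match, but the prompt note is missing -> retag
print(needs_retag({"1girl", "solo"}, {"1girl", "solo"}, {}))  # True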
If the /hdg/ userscript anon is still here, do you think you could add support for NAI3 metadata, kindly? I think 4cuck has it added already
>>23024 Ah I see. I already use multiple tag services to separate things, so the sd service getting wiped isn't really a problem for me. But I guess if I decided to update the tags, it would be better to just wipe them manually, export the files into a folder and run the script on them again. Anyway, anything new when it comes to the webui parser? Just to know if there's even a reason to retag.
>>23026 i think i fixed a few bugs related to token weight parsing a few months ago, and there's the newer ComfyUI and NAIv3 parsers for those images. a while ago I also added a tag indicating which frontend was used (`prompt_type:comfyui`, etc.). and also with hydrus' somewhat recent notes feature, added the original image filename and prompt parameters/positive & negative prompts, but I need to add some of those to `retag` first.
Is everyone using the default quality/UC presets on V3 or have people already found something better?
>>23028 My current preference is enable quality tags, disable UC presets. Manually add "worst quality, low quality" to negs and add anything you don't want if it is showing up in your gens, like text/artist name/signature, etc. DDIM sampler for default res gens; for higher res enhance 1.5x with Strength 0.32 and sampler set to DPM++ 2M exponential, SMEA enabled. You could also just stick with DPM++ 2M exponential in general, I found it's a little better than Euler but I still prefer DDIM even with the light splotching you get sometimes.
>>23028 i'm using >very aesthetic, {{amazing quality, great quality}} >anime screencap, ribbon, backpack, {{worst quality, bad quality, censored}}, amputee, deformed, fewer digits, bad perspective, bad anatomy, {{{signature, watermark}}}, unfinished
>>23026 >>23027 okay reworked the retag thing in the latest commit, i'm testing it out on my own db and it seems to work but backup your db before running it. also when running the command you should supply a tag query to apply the retag to, it should use the hydrus api format
Is there a way to determine when the V3 dataset was put together?
>>23032 the console tells how many pieces the tag circle actually represents, someone put it at late september this year
>>23034 this shit is gonna be really bad for me man
>>22976 eiji (eiji) is a bit rough but it works
>>23029 Any reason to switch schedulers? Is 2M Karras not preferred anymore? Still testing samplers at "normal" gens, the light splotching you get from DDIM is pretty annoying
could one write userscripts for NAI's frontend? already have a few ideas but i'm afraid of being banned
>>23039 To do what?
>>23040 download each image as soon as it's generated so i don't lose it later, usually i just download the zip every time i come across something good and unzip it over a dir
was thinking about automating the gen button but that seems against ToS
>>23037 2M Karras is fucked on NAI v3 but 2M Exponential is the recommended and essentially the same thing. I just prefer the contrast DDIM tends to have but 2M with SMEA enabled is probably better overall.
>>23042 I tried enabling SMEA and SMEA DYN manually and I swear the results are fucked compared to just leaving it on auto on "normal" resolutions.
>>23041 also wildcards would be really nice
>>23044 DYN doesn't do much from what I've tried
>>23041 >>23045 also another thing I thought of, a way to notify you when a certain threshold of anlas has been spent per day, so you can spread them out better throughout the week/month
Has anyone worked on a stand-alone front-end for NAI or even just a user stylesheet? I know it sounds stupid but I think the overall screen space could be used better.
>>22705 How did you convert the image to scribble?
>>23049 i played with doing one for their text service and just came away hating css
>>23052 toraishi 666
>>23031 Any plans on adding jpg support, or would that be too complex?
Anyway, I made a catbox downloader in hydrus and it already seems pretty good. I used your archive script before, but I wanted to import the images straight into hydrus without having to keep an extra copy of the files around. I also wanted to get thread and post urls, which are now saved with the image, and you can click on them in the image viewer to jump straight to the post. It also grabs some extra catboxes that the script missed, like links surrounded by text, including multiple links on a single line. And it doesn't look like it misses anything now, after a bunch of iterations. I still need to add support for 8chan though, it can only do foolfuuka archives right now. Though it has a downside in that you have to provide thread links yourself. I could post it later if anyone is interested.
Something I didn't expect is that if you keep the seed and prompt the same but add an artist at the end it'll actually keep the composition (mostly) the same. It's really fucking refreshing.
>>23057 Wasn't that always the case? Adding or removing tags at the end of the prompt always seemed to have the least impact on composition.
>>23058 Yeah but have you ever been able to do it with artists locally?
>>23059 Does local even support artist tags?
>>23060 No, that's the point. You'd have to fine-tune a model for that but that's out of most people's reach, hence the use of LoRAs. I've never had any luck with keeping the same composition when using/mixing styles. I'm sure you can use img2img/controlnet with the original image but that's fucking lame and not a proper solution.
>>23061 Ah ok. Yeah, loras change the composition because they modify the unet or something, while tokens are just tokens.
>>23059 laion artists yes
>>23063 I don't think any of us care about greg rutkowski and van gogh.
>>23062 >Yeah, loras change the composition, because it modifies the unet or something Yep, that's the issue. In the end nothing beats a well-trained model. Cryptofags who still haven't sold their farms should just repurpose them for AI training and let people rent them at reasonable prices.
>>23064 Give me the Gogh lolis!
>>23065 Hell, there MUST be some chink with like 100 of those Moore Threads GPU just sitting idle. They're shit for gaming since they were meant for AI so SURELY you could train shit on them, no?
>>23064 Some artists aren't that bad for backgrounds
did v3 kill the traditional media tag? sketch still works but the standard trad media seems to do jack and shit
also i might be a brainlet cause i can't find a way to have less known characters keep their default outfit on, is there a tag for it? when it comes to characters with more data it's more or less overfit unless you change it manually
>>23068 Yeah, true. I really like using a pointillism LoRA on local, it generates really nice backgrounds and really good skin detail.
would you cum in an old man's ass
>>23071 if the old man looked like that
Any tricks to making huge/puffy nipples on V3?
COME HERE YOU LITTLE SLUT, CLEAR MIND JUST STARTED PLAYING
>>23059 furries have done that for a while, yeah
we "just" need someone with a 3090+ and lotsa storage to make waifusion happen faster, since that guy has his 3090 busy on fluffusion R3
the dataset is being taken care of and there are very early epoch prototypes, but the ETA for the final model is june 2024 since he has to train 1-2 models beforehand
>>23075 I don't give a shit about the furries. Bring that cancer elsewhere.
>>23076 >I don't want solutions, I want to be mad yeah we can see that
>>23077 >take the furry cock goy it's the only way No thanks. I want nothing to do with those creatures.
>>23078 >model initialized on a non-furry base >dataset is scraped only from danbooru the only thing "furry" in the equation is the guy making it and if someone else takes care of training like I'm suggesting then there is basically no link
oh lord he's multi-thread fagging
We should ask the furries their Workflow so we can mimic to make our own model
>>23080 always have been
I don't have the hardware so the best I can do is shill until people dedicated enough show up to do so. all the excuses for no new finetune projects have been
>no money for compute
>no expertise in finetuning
>no good dataset
and here we have a solution that alleviates all of these. Most of the tech applied there has been figured out, we are literally just a few months of training away from a new local base model much closer to naiv3 than the current local naiv1. Tell me, why wouldn't I shill for this type of project? Because it might hurt NovelAI's current subscription boom? Because a furry will get credited?
>>23081 that is what I'm proposing, since an anime model (and I reiterate, fully anime dataset) is planned, the compute is just not available for a while
>>23082
>No money for computing
We just need ONE guy with a 3090 or 4090, that's all, and there's plenty of those with one already, for example in the civitai discord
>No expertise in finetuning
I'm pretty sure that there's at least some crazy scientist on 4cuck /hdg/
>no good dataset
In 4cuck /hdg/ again there's plenty of dudes saving at least +100k images with tags, there's one crazy madlad with 2 million I think
>>23082 i meant the guy who read a post about furry models and it clanked around in his gay empty head until goyposting fell out. all it takes for local to improve is a lot of datasetting and a few thousand dollars worth of gpus. we've known this since the start. i'm just waiting for someone to volunteer and mean it and not turn out to be a flake or crypto retard.
>>23083 >We just need ONE guy with a 3090 or 4090 that's all You dumb fucking nigger do you seriously think some guy will keep his 3/4090 working for days and days? Are you gonna pay his electricity bill? Do you have any idea how long it's gonna take? Not a LoRA, not a fine-tune but a MODEL. On the same scope as V3? Fucking delusional.
>>23085 I would help pay the electricity bill if he opens a patreon or whatever, I don't mind.
>>23085 Much less expensive than paying for compute and data transfer/storage on a cloud platform. With a lucky card where you can change the clock and the voltage to optimal values for fast but cheap training it's even better. As >>23086 said, just open a patreon for these expenses.
A few months ago he said that his 310w 3090ti costs him 82 bongdollars to run a month, and that's without the more recent optimizations he used. He ran the numbers for a cloud A100 and even if it's faster, it almost costs as much per epoch as a local full run.
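rough sanity check on that number (the ~£0.35/kWh rate is my own guess at UK prices, not something he stated): 310 W × 24 h × 30 days ≈ 223 kWh a month, and 223 kWh × £0.35 ≈ £78, so the 82 bongdollars figure is in the right ballpark for a card running flat out.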
>>23087 Man i'm pretty convinced we could actually do this lol, with 50 guys supporting, providing dataset and helping a bit with money, hopefully we could release the first prototype that's not full of shit
>>23087 Ridiculous. They're just delusions of grandeur until people put money in. It's been over a year at this point and none of the projects have anything concrete to show for it. So much talk about organizing and setting up all the supporting infrastructure but nothing concrete.
>>23089 Case in point: >>23088
>>23089 >none of the projects have anything concrete to show for it on the anime side only
>>23091 Oh please do go on about the advancements in furry hyper vore.
>>23092 >full finetunes aren't real because I don't like their focus lmao
>>23093 >unironically defending furshit because one day in six(teen) months they might help advance anime models
>>23095 not my problem The only thing I care about is having a good local model to coom.
>>23096 It will literally never happen. It might in some parallel universe but you still won't catch up anywhere near as much as you're coping.
>>23097 >happened for furries >swapping in an anime dataset will magically make it never work ever tell me who's coping again
>>23017 Oh I didn't know you got one of those, that makes me happy. For reference I also shared that youmu-kun LoRA so this isn't our first encounter. :^)
>>23098 LMFAO nigger keep coping, you couldn't even find a single (1) guy with a 3/4090 and you're already thinking about the finished product and when it's gonna happen. The last few posts have been nothing but "but what if" and the biggest one is fucking "what if we find a single guy with a 3090" LMFAO
Yet I'm the one coping? Sure fucking thing bro, two more weeks until we catch up to NAI V3.
>>23100 >you couldn't even find a single (1) guy with a 3/4090 Keep up, schizo, it's about finding another one so that it gets trained earlier. It will be trained regardless of your delusions.
>>23101 We'll be pensioners by the time it's done and it would still be behind.
>>23102 >nooo june next year is too far It's been clear for the past few posts you're an underage zoomie. Can't think about the future, needs immediate results, has no sense of money (could just be a third worlder though), brand loyalty and extreme reactions. Get a job and go touch grass.
>>23103
>t-two weeks until we get our own NAI V3
Lol
>you're a poor jobless zoomer third worlder
I can afford paying 25 bucks a month for NAI. You can't afford a used 3090 to train.
>>23073 just train a lora
>>23105 this kills the NAIbro
sorry shill I will not pay your subscription
i have *a* 3090 but surely trying to finetune a brand new model on a single consumer card for months trying to get a quality result is...? i remember even with the NAI leak, there were some earlier prototype models in the torrent that were nothing even near NAIv1 quality, presumably failed training runs. imagine it costs like $10k or 2-3 months to finetune one of those only to find out you fucked up the parameters/dataset
what is the current state of model finetuning knowledge anyway, is there a rentry or something one could look at to understand the current knowledge? how much extra software would it require (finetuner, tag classifier, quality tagger, etc) that NAI has probably written for themselves? what would the dataset tags look like if raw booru tags are too low quality? could you finetune a cheaper test model on a subset of like 1000 artists with cleaned up tags to see how good the results are before sending like 3 million images to a GPU cluster or is that naive?
>>23105 >>23106 sorry faggots I figured it out and I'm not telling you :D
>>23108 >or is that naive? This entire endeavor is naive. Rome wasn't built in a day but they planned and funded everything before they started building it. This is more like a kid dreaming of getting a hypercar just by selling homemade lemonade during the summer - except he doesn't even have any lemons yet. He doesn't even know where he's gonna get them from but he's already acting as if he owns that car.
>>23108 >is there a rentry or something one could look at to understand the current knowledge? if you actually want to try I would start by contacting lodestone (fr creator), he is super autistic and loves sharing knowledge on finetuning but be warned he will go deep into the technical aspects
>>23108
>is there a rentry or something one could look at to understand the current knowledge
Unfortunately no. You'd have to ask lodestones (fluffyrock) and jordach (fluffusion and waifusion) for in-depth details, or catch up on months of discord chatlogs. They use personally modified training setups (because TPU for FR, and just a lot of not/badly implemented shit on kohya for FR and FF) that include a lot of nifty tech and improvements like vpred, zsnr, min-snr-gamma or debias, IP noise, EMA weights, etc.
Jordach is the guy training on a 3090ti with a modified StableTuner install. There is a prototype waifusion model to tune the hyperparams for the full run next year.
There's also drhead who made the zerodiffusion model, a new general purpose base model (and inpainting model) with all that tech included. Also trains locally.
Just mute and hide the furry porn channels and keep to the tech talk and the threads on custom models and tuning projects about these three models.
>>23112 The more details, the better
>>23111 >>23112 i checked their roadmap and it seems like he's aiming for an "HD" waifusion in september of next year at the earliest, which might not necessarily be an SDXL finetune (as of now)? and it looks like in between he will attempt another 768px waifusion in march? sounds like a lot of catchup if naixl is the one to beat. also the fact that he polls for two different training types "better compatibility" and "better detail" kinda bothers me. i guess because NAI didn't need to complicate things with two model choices, there's just one state of the art model for anime with good enough parameter choices. but at least he's releasing epochs and sticking to a schedule, he's clearly doing some good where nearly nobody else is. why does he have to be limited to a single 3090Ti? he mentions a lot of times that higher resolutions and more data slow down the training too much. this is the kind of project that would be nice to fund if it produces decent results, given that he's committed to training for a whole year's time anyway. anyone aiming for a NAIXL killer that releases checkpoints early and often deserves some funds in my opinion.
>>23114
>not necessarily be an SDXL finetune
Everything is SD1.5, now using zerodiffusion as base weights. There have been tries by lodestone which led to TPU sharding, but it wasn't nearly promising enough to warrant switching over. SDXL itself isn't a well thought-out architecture, it was known for a while that the unet wasn't the limiting feature, but the text encoder (and also the VAE a little bit) is the main one, and little improvements were done there. Using an LLM as a text encoder is considered but it's not a priority and it's hard to find a working way.
>HD versions
the HD models are for higher training res, which is mostly for large res upscaling since the base model does fine for usual SD resolutions.
>two different training types "better compatibility" and "better detail"
It's just a rough way of explaining the effects of vpred zsnr, since using a different prediction type leads to better results but makes existing add-ons for eps-pred (loras, etc..) less effective or broken. If it's meant as a new standard anime model, then that's not an issue, people will eventually retrain their stuff (which is a good thing since early loras suck ass, holy fuck the abmayo lora is bad)
>if naixl is the one to beat
it was planned and started prototyping way before naiv3, so it wasn't made with that in mind
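for anyone lost on the jargon there: vpred (v-prediction, from the progressive distillation paper) just means the network predicts v = alpha_t * eps - sigma_t * x0 instead of the noise eps directly, which is why eps-pred loras don't carry over cleanly, and zsnr (zero terminal SNR) rescales the noise schedule so the final timestep is actually pure noise, which is what lets a model do properly dark or bright images instead of everything drifting to mid-grey. that's my plain-words summary, not something pulled from the FR/ZD docs.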
(1.63 MB 1088x1600 00911-3157256280-red eyes.png)

So a friend had a gift key and gave it to me, so I guess I'm giving naiv3 a go. Just a lazy img2img upscale with an even lazier inpaint for testing purposes.
If I had money I would fund that furfaggot
I need to get a job for my waifu
>>23117 based for now, just feed the collection bot with tags you want, I've added my waifu(s) and artists I like along with obscure tags I wouldn't publicly disclose
>>23118 happy thanksgiving you disgusting retards hope you get to spend some time with people you love and don't just spend all day generating porn in a cold basement i get turkey and pie and yuffie gets my cum lol
>archiving /vt/ catboxes
>around 200 images per thread
>it's all garbage
On 4cuck I asked if people would support a new model from scratch or finetune that isn't WD, their answer?:
>Show me a small test finetune so I can know if they know what they're doing, because WD didn't know what they were doing at all when finetuning. Reasonable
>If the project gets proper recognition, has a good proof of concept, and is vetted on github, I would contribute GPU cycles yes. If it's some random fucked up installer with no vetting or it just wants to be a better 1.5 model then I won't bother, composition is the future and I would only contribute to an open source model that aims to take a step towards that.
>>contribute GPU cycles? Training over the internet is a meme. They'll need to rent centralized compute or they're dead in the water.
>>>a single 3090 is enough, training will take at least a month, but it will be much cheaper overall than renting cloud compute, since you'll be paying the hourly rate (more than just the electricity bill) + storage fees + bandwidth fees
>>23119 I don't even celebrate that holiday because I don't live in murica, but thanks anyways retard
>>23056 >jpg support what are the metadata formats for jpg again? would you have example files for whatever UI it is?
>>23123 It's EXIF UserComment. Here are some that have it >>22654
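if you want to pull that out yourself, a rough sketch with Pillow (the UNICODE prefix handling is an assumption based on the jpgs I have - the byte order isn't always consistent, so try utf-16-le if it comes out garbled):

# read the EXIF UserComment tag (0x9286 inside the Exif IFD 0x8769),
# which is where webui puts its parameters for jpg output
from PIL import Image

def read_usercomment(path):
    exif = Image.open(path).getexif()
    exif_ifd = exif.get_ifd(0x8769)      # Exif sub-IFD
    raw = exif_ifd.get(0x9286)           # UserComment
    if not raw:
        return None
    if isinstance(raw, bytes):
        prefix, payload = raw[:8], raw[8:]   # 8-byte charset marker, then the data
        if prefix.startswith(b"UNICODE"):
            return payload.decode("utf-16-be", errors="replace")
        return payload.decode("utf-8", errors="replace")
    return raw

print(read_usercomment("gen.jpg"))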
>>23099 I guessed correct then. Thanks once again, this thing is really refreshing.
sdxl bad then because uh civitai pajeets, sdxl very bad now because nai uses it
you forgot the meme arrow therefore you say this and look like this
>>23127 what about lilandy (andava)?
>>23131 Find it uncanny, actually.
>>23126
>Pity so many good artists are banned/restricted from being posted on Danbooru.
Yeah this is the reason why LORAs won't go obsolete, there's a small amount of artists whose art isn't on there
>>23118 wait it's gelbooru, not danbooru, nice
>>23039 >>23041 Of course you can write userscripts for it. I made one to add shortcuts for attention editing here: https://gist.github.com/catboxanon/9c3003f19bfb3b306d3e47bdd6b68ca7 Downloading each image after it's generated would be a nice one to have, I can look into adding that. I don't see how automating the gen button would be against ToS either when that's what most people are doing anyway. >>23045 >>23048 Don't care to implement these though.
>>23121 Start training from fluffyrock vpred terminal snr. It's the best open lewd model at the moment but it only knows e621.
>>23133 Or artists who are in the dataset but annoyingly just below a threshold to be represented properly (Apostle).
>>23136 it's probably better to start from zerodiffusion since it was made as base weights with vpred zsnr
fluffyrock is starting to be pretty fried, and EMA weights only help for inference, not training
>>23138 Fine, FR unet with ZD text encoder. That's probably the best base to use. It will drastically reduce the amount of training required and has already been trained up to 1024 resolution.
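if anyone actually wants to try that, the naive version of the merge is just a key swap - this assumes both files are standard SD1.5-layout .safetensors where the text encoder sits under cond_stage_model. (file names are placeholders, and this is my sketch, not what the FF/ZD people actually run):

# take everything from the FR checkpoint, then overwrite the text encoder keys with ZD's
from safetensors.torch import load_file, save_file

fr = load_file("fluffyrock.safetensors")
zd = load_file("zerodiffusion.safetensors")

merged = dict(fr)
for key, tensor in zd.items():
    if key.startswith("cond_stage_model."):   # CLIP text encoder in ldm-style checkpoints
        merged[key] = tensor

save_file(merged, "fr-unet_zd-te.safetensors")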
>>23139 I'd be cautious mixing ZD and FR since the closest relative is SD1.5. Maybe FF? but only R3 is based on ZD and it's currently training...
>>23071 Catbox pls I've been dying to be able to prompt plump pussy like that
now this this is better than podracing
>>23041 Updated the userscript I mentioned in >>23135 to support auto-saving and generating forever. I might implement wildcards but I don't want to complicate prompt syntax much.
uooh sussy
>>23069 genuinely can't get traditional media to work and sketch always looks like a digital sketch on V3
>>23126 >>23133 the hitomi.la guy had a mirror of some parts of danbooru on nozomi.la, like here is negom for example https://nozomi.la/search.html?q=negom#1 so this seems to predate the original wave of artist bans when NAI first leaked
weren't some of the banned artists in NAI3 or not?
>>23143 thanks, i tried it on chrome but the autosaving wouldn't work until I changed the line like this, since the maskImage property is named differently in webkit:
const saveButton = Array.from(document.querySelectorAll('div[data-projection-id] button'))
    .filter(button => getComputedStyle(button.firstChild)["-webkit-mask-image"].includes('/save.'));
>>23147 also autosave only saves the first image if you gen at batch size >= 2 so i think you have to click all the save buttons instead
is inpainting free with a sub or do you have to pay in anals like with upscaling/variations/enhance?
>>23143 Is it possible to implement a warning when you accidentally try to leave the page? It's on the mobile version but not on the desktop one from what I've seen
>>23143 Implemented wildcards for this now, check the changelog for syntax. >>23147 I implemented something that should fix that now. >>23148 Should be fixed now too. >>23150 I get the warning in my browser. I'd want to replicate that issue before I bother to implement it.
>>23151 i think autosaving still doesn't work, it searches for another css key named "-webkit-mask-box-image" instead. something like this could work instead?
Array.from(document.querySelectorAll('div[data-projection-id] button'))
    .map(button => {
        const computedStyles = getComputedStyle(button.firstChild);
        const maskImage = Array.from(computedStyles)
            .filter((css) => (css.includes('maskImage') || css.includes('mask-image')))?.[0];
        return (maskImage && computedStyles[maskImage]);
    });
>>23152 Yeah good point, pushed another fix now.
>>23151 >I get the warning in my browser. I'd want to replicate that issue before I bother to implement it. nevermind i get it on desktop as well after generating at least 1 image
>>23152 is the menu with ctrl-alt-x implemented? doesn't work for me but i'm on a mac
>>23155 (the main feature does work though, it's cmd instead of ctrl)
>>23153 thank you, this is a good start so far
for the wildcards i was thinking of some way of managing lists in the settings menu and recalling them with wildcards eventually
for the generate forever feature i have heard of people being banned on NAI for botting, so maybe it's a good idea to make the timeout random with a not so obvious distribution and limit the number of images generated to like 3-5 per run or something
>>23155 i'm on windows and it's ctrl+alt+x for me
>>23157 sry i meant like __wildcards__ the webui way
>>23153 i think if you try to enhance an image instead of generate it doesn't autodownload
>>23159 You're right, looks like they do some DOM shit out of order with that. I just added a delay to fix it. Posting what I auto-saved so my 20 anal points aren't totally wasted. >>23157 I'll consider making wildcards more flexible like that.
>>23162 we finally know what youkai extermination actually looks like (rape)
>>23163 >>23162 Oh I forgot to link to the full sequence with the best images for each character. In total this took around an hour to make. Once I got rolling it was really fast to swap characters. Obviously, with actually spending analcoin and inpainting it could be even better. https://www.pixiv.net/en/artworks/113690586
>>23164 Isn't inpainting free as long as you're within 1024x?
>>23166 Last one. The engine really struggled with meiling's composition. Also where did aya come from?
>>23167 Yes, but NAIv3 inpainting is so incredibly shitty and bare bones in terms of options (no option for inpaint only masked!) that it would be faster to take it to local and inpaint there, but that is way more time than I wanted to spend on this. The whole point for me was to see how easily I could swap characters. I also thought it'd be fun to do a small semi story-driven sequence. If people on pixiv like it I might do other games and put some more effort in now that I have the basic workflow down. Not just more touhous, but any game that has an unreasonably large amount of danbooru art, like Helltaker. It's a shame that there are mega popular characters from western IPs that I like that have like 0 danbooru art though. Guess I'll stick to local for that.
>>23168 >Also where did aya come from? always ready to get the scoop (sloppy seconds)
>>23169 True, it's free real estate.
Come to think of it, would it have been better to have each character in a solo intro panel/fighting stance before each nsfw image? Kind of give it an instant loss vibe? It would also be a good chance for the characters to show off their differing personalities.
>>23171 It's done. If I do it again, I'll make sure to have the underwear roughly match between the before and after pics though. Oh well. When you notice something like someone's underwear changing color between two panels, a yukari did it.
>>23141 imageboards don't strip NAI's metadata yet
it's just kawakami rokkaku
>>23149 free
Been out of the loop, there's a new NAI? No chances of a leak I guess?
>>23174 not really, and even if there was it's an explicitly fried SDXL model, so it's almost worthless for local
>>23175 >so it's almost worthless for local now that's a fucking pathetic cope
>>23177 t. has never tried tinkering with SDXL
>>23175 holy kek localoid cope has reached a new high
>>23180 Sorry I can't read your message, you probably tried to inpaint something and the whole thing went through VAE, destroying detail and text.
>>23180 You need to be funny in order to tell a joke.
>>23181 I can just make a lora for that.
>>23182 How about you make a lora to get rid of your virginity?
>>23183 I did it a long time ago. At least I can do it for free.
>>23184 Neither trannies nor fast food gloryholes count.
>>23185 >>23186 That'll cost you twice the good boy points.
>>23187 One more thing I can afford that you can't apparently.
>>23188 Damn, we finally have an explanation for >>23120
>>23189 Was about to say you forgot the punchline but then I remembered you're both the joke AND the punchline. Maybe you're some fourth worlder ESL.
>>23190 >posts /vt/ reaction image >"why is /vt/ relevant" lmao
>>23191 >he thinks the world revolves around imageboards Terminally online behavior.
>>23192 t. uses vtuber reaction images
can you little queers get a room
>>23193 >he's malding over vtubers now >thinks everything is 4cucks Terminally online. Call your parents, ask them to open the basement's door for you and let some fresh air in, you've been breathing the same stale air for months and the lack of oxygen has rotted your brain. You can skip the first step if you manage to get up by yourself but I doubt it.
>>23195 >all these words to shill a paid online service
>>23196 I've never told anyone to subscribe and half the thread has been V3 posts, update your strawman. :^)
tests guy updated the rentry with facial expressions, pretty good stuff. "fingersmile" is extremely gacha though
>>23197 >nooo actually my shilling is totally just like people posting their gens lmao
>>23199 Quote a single post where I've explicitly told people to subscribe or spend money on NAI in any capacity, ESLoid.
>>23198 this seems to be a sampler issue in part
>>23200 >ESL ESL ESL Are burgers this insecure?
>>23202 Are ESLs this retarded?
userscript anon would it be beneficial to add random delays to the autosave and autogenerate features? just in case NAI measures the delay
want to make LORAs again, any major updates to easyscripts or the way we make LORAs now?
>>23206 it's pretty fun
>>23206 How does it work? I've never tried it. Can I have it write out an entire episode of Top Gear?
>>23207 Yes, a fun way to explore "taboos". >>23208 >Can I have it write out an entire episode of Top Gear? Kek, maybe? They have a few different text models it seems but I have only used one of them and all I have fed it so far is degeneracy. All that I'm basically doing is giving it a "prompt" and changing what it writes to steer the story in the direction I want it to go, pretty enjoyable and hot.
>>23208 it's a completion model trained on literature with a functional but unremarkable instruct module trained on top. meaning it's best suited for continuing whatever you feed it, including tone and quality. the /aids/ thread has a prompt library at aetherroom.club like their imagegen model, it isn't the biggest or strongest, it's just uncensored and pretty high quality for its size.
(1.26 MB 832x1216 g41e6r8.png)

(1.24 MB 832x1216 5h1e6.png)

(1.21 MB 832x1216 41gher5h.png)

(1.34 MB 832x1216 15reg.png)

remember: harvins are made for sex
>>23210 >like their imagegen model, it isn't the biggest or strongest Is there a higher quality anime-focused SD service out there?
>>23212 uncensored dall-e 3 is supposed to be pretty wild if you can somehow wrangle access
>>23213 Haven't seen too much dall-e 3 anime stuff and what I was able to see wasn't too impressive tbh
>>23214 just going off hearsay
>>23211 cute
>>23215 Might look for more, genuinely haven't seen a lot of anime from it
>>22819 Nobody cares but the partition is still there - however I'm a brainlet and I couldn't figure out how to recover it with testdisk so I've resorted to disk drill, hope it actually recovers everything or I'll have to fish the old system out of the garage.
>>23217 most dall-e 3 generations come via bing, which is heavily filtered and degraded. not sure where you'd go to find the purestrain stuff.
>>23218 when i had a dying HDD i used macrium reflect to mirror what i could onto a new drive
>>23220 It's not dying, the HDD worked perfectly fine while it was still in my old system. Something fucky happened when I pulled it out but the partition is still there according to testdisk, it even has its name. Just says it can't recover it and the program insists it's 1000GB instead of 931. I'll probably plug it back into the original system at some point and do a proper backup but for now I'm letting disk drill run, taking what I can and then leaving the full backup for another day.
>>23221 oh god the timer is going up to 10 hours
>>23218
>however I'm a brainlet and I couldn't figure out how to recover it with testdisk
I also had this issue (pulled a drive out of its USB dock while it was busy) and was able to recover the files. TestDisk is NOT very brainlet friendly
>>23224 It complains about the drive being too small and to check either the BIOS settings or the jumpers - never had any weird BIOS settings IIRC and I've never had a single jumper on this thing. The timer for disk drill went down to nearly 5 hours but it only found 34.5gb out of 930 (yeah the drive was full).
The original windows install is fucked but I wonder if plugging it into the original motherboard and just winging it with a live troonix distro or sticking a temporary windows install on another drive might do the trick. I have no idea why this is the only drive out of 4 that "failed" this way, it was perfectly fine even the day my windows install crapped out. Just left it there for two weeks before pulling the drives out, maybe I should've done it sooner.
>>23225 also the drive was much, much healthier than my other WD drive (a bottom of the barrel WD green) and that one worked fine
Been trying to do fizrot, sometimes it works amazingly well and sometimes shit just breaks and the faces and details just turn into mush.
other people's research:
artist:(artist) is how the model was trained. if you're doing a specific character the artist doesn't often draw, character should go before artist
official art is contaminated by vn slop, key visual is better but you gotta stick key in your negative or else it starts adding keys
>>23233 >artist:(artist) is how the model was trained So I should do it like artist:(fizrotart)?
>>23234 artist:fizrotart or whatever matches their danbooru tag, like artist:gawain (artist)
>>23235 how did the guy find out that it's the proper syntax?
>>23236 asked nai staff on their discord
>>23237 well shit that was easy
dumb question but i gotta emphasize the whole thing and not just the artist name right?
>>23237 Did someone ask them if they'll keep updating the model periodically just to keep the dataset up to date?
spent half the day trying to expand on typo's asuka getting molested by prototype evas series. it can't do prototype evas very well but the monster tag is imaginative.
>>23238 no idea, but i doubt you HAVE to given that artist tags often work even with the wrong syntax, just might not work as strongly as you want.
>>23239 haven't seen it if they have, but my guess based on their history is they're cranking towards making their own model and if any updates to v3 come, it'll be part of testing in preparation for baking it like v2 was.
>>23240 will test it a bit, thanks
it's shitting the bed right now, probably high traffic, but i managed to snatch these
(1.63 MB 1280x1280 00002.png)

(1.64 MB 1280x1280 00008.png)

>>23074 fellow duel monsters enjoyer i see
>>23240 it's weird, despite using the right syntax nearly half of the time i get nothing that resembles fizrot, stuff just turns into mush. not the only style it happens with either
>>23244 i haven't had that problem. how many tokens is your prompt? prompts get broken into chunks of 75 tokens and for whatever reason nai3 is currently ignoring token 76 and 77. tags that straddle the 75 token line will get split into often semantically meaningless bits as well.
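if you want to check whether one of your tags straddles that boundary, something like this works locally (assuming NAI uses the stock CLIP tokenizer, which is a guess on my part since it's SDXL-based):

# print the token range each tag occupies and flag ones crossing a 75-token chunk border
from transformers import CLIPTokenizer

tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

def chunk_report(prompt, chunk=75):
    pos = 0
    for tag in (t.strip() for t in prompt.split(",")):
        if not tag:
            continue
        ids = tok(tag, add_special_tokens=False)["input_ids"]
        start, end = pos, pos + len(ids)
        marker = "  <-- straddles a chunk boundary" if start // chunk != (end - 1) // chunk else ""
        print(f"{tag!r}: tokens {start}-{end - 1}{marker}")
        pos = end

chunk_report("1girl, blue hair, artist:someone, very aesthetic")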
>>23245 nowhere near the token limit, don't think i ever got close to 70
>>23246 weird. i've found one or two styles that seemed brittle, like they were easily broken by quality tags. is it turning into a normal prompt or just incoherence?
>>23247 both? it's extremely incoherent, sometimes it will get it perfectly right, sometimes it will get the body right but the face looks more like a rough sketch or just mush, sometimes the entire style looks like a normal prompt, etc. see >>23231
>>23248 I think NAI is worse at 1girling compared to local most of the time, especially if you like those 3Dish styles.
>>23249 >I think NAI is worse at 1girling compared to local most of the time Works fine for me style shenanigans aside >especially if you like those 3Dish styles. Why would it matter? The style is on Danbooru and is obviously recognized. This isn't the only style that shits the bed randomly, other 2D styles do too.
holy shit the amount of schizophrenia posting caused by nai is too much
>>23168 fucking lol'd, I love these
>>23239 At best, the dataset was updated from db2019 to stay up to date with the DNP list.
>>23254 Some tards on 4cuck said it was updated in september, idk if I believe that though
Do you guys know how to easily mix styles in nai v3? i tried to do prompt mixing but i couldn't make it work
>>23257 just list them with commas like artist:artist1, artist:artist2
simply listing names without the artist: prefix works too, but I feel like it's kinda inconsistent
you can reduce emphasis by using [] or putting the artist name at the end
didn't try mixing with |, it's kinda weird, will try to figure it out later maybe
>>23235 >>23237 Does this apply to anything else? Like character:charactername? Was it trained with any other meta tags like rating? Does rating:explicit do anything?
>>23259 character:charactername, yes. other metatags were transcribed:
best quality, amazing quality, great quality, normal quality, bad quality, worst quality are score
very aesthetic, aesthetic, displeasing, very displeasing - not sure, i wonder if they ran a classifier on their dataset to generate these
rating:explicit is nsfw
>>23260 haven't tested it yet but I've heard that very aesthetic from the auto quality tags will interfere with artist styles but all the other tags are fine
>>23248 NAI 3 is pretty touchy when it comes to prompting characters with artists, especially artists with detailed shading/2.5d. Putting the artist at the very beginning of the prompt is probably the first thing you want to do. Raising emphasis on the artist and lowering emphasis on the character works as well, as long as you are fine with prompting back missing details like hairstyle, clothes, etc. Also a tip for detailed shading/2.5d, put sketch and/or flat color in negs
>>23261 yeah none of the tags i've tested are universally applicable. just stuff to twiddle with.
>>23264 Can you paste your prompt? Seems like you got it to be more consistent than I ever can with position
>>23265 You can extract the metadata from all of them from the catbox link https://files.catbox.moe/6mxo6p.zip These images are the last four and I think are the best ones - I've since noticed that increasing the strength on my positive quality tags from {{{ to {{{{ is (IMO) a small improvement and it gets worse afterwards.
(1.40 MB 1088x1600 naiv3testingu.png)

I miss style LoRAs.
>>23269 Why?
>>23270 Feels like I have more control when mixing different styles.
>>23271 You can have control over those by reordering the prompt and using emphasis but yes it's a bit awkward
>>23272 That's what I've been doing but I'll give it another try after I'm done with textgen.
>>23269 I miss my fizrot/robutts/takorin style but I don't miss the inconsistent anatomy
>>23274 Understandable.
Style mixing in NAI 3 is a bitch. Prompt order makes a huge difference compared to weight and then changing the seed can just fuck everything.
Maybe if I spam enough V3 fishine art she(?)'ll notice and make a sequel
>>23276 Yep this has been my experience as well, it's inconsistent with style mixing
>>23276 Cute
Ah the downfalls of prompting artists
>>23280 FEET
>>23282 What about eiji
>>23283 I tried him, he works fine but I don't have any examples rn
also based eiji enjoyer
>>23285 Post some when u can. I like his style, I think I'm gonna train a Lora from scratch cuz the one on civitai gives me el cancer
V3 struggles with Shaw but it's nothing some light to moderate wrangling won't fix. Don't think the outfit will ever be 100% accurate but whatever.
Really couldn't find an art style for her but here you go, helmet detail is fucked but that's probably kamepan's fault
honest question: why are you fellas comfortable generating loli on a service that has your credit card info
>>23289 Because I don't live in cuckstralia, cucknada or the cucked states of weimerica
>>23289 not illegal where i live
>>23290 NAI was founded when people found out AIDungeon user generated stories were on various "freelancing platforms" or whatever you wanna call them, where they were being tagged as good or badthink by the general public (probably causing a number of traumas). The entire reason people use(d) their services despite lower quality models than the competitors was trust that they'd stick to their privacy policies, both for their and their user's sake. Though I haven't really kept up with them for the past year or two, I'd assume nothing has changed about that. go search for the last few generals on /vg/ that were still called /aidg/ if you wanna see despair. I think there was a rentry documenting the entire thing too.
Best I can do to get takorin to work is '{{{artist:namako (takorin)}}}, {{{pixel art}}}' (pixelated seems to do more harm than good). It mostly gets the idea but it's not perfect, if anyone's got any suggestions I'd love to hear them. Using lower resolutions doesn't work, no.
why in the goddamn fuck is the sideboob tag forcing a view from the character's left side 95% of the time?
>female ejaculation either doesn't work or turns the girl into a futa What the fuck?
(1.26 MB 1024x1024 fvc67yvuoi.png)

(1.26 MB 1024x1024 g9bu.png)

(2.19 MB 1280x1856 r456dcxr.png)

Tried my hand at a sequence of sorts, happy with how it came out for the most part, though I wish the gapes looked better.
>>23296 Try with "female orgasm", might be important for it to work properly
very good results were obtained
>>23293 >>23294 I highly doubt they trained on gifs and they're probably still filtering out or giving less priority to low resolution images. The LoRA does his style a ton better than this. Unfortunate.
>>23301 you can refine the style with local
>>23301 What I got is nice enough, I like this "high res" takorin style but yeah, local gets much closer to the actual low res pixel art (though the details just turn into mush and start bleeding).
Will probably keep using it like this just for shits and giggles, had fun trying to recreate some of the gif's frames 1:1
>nai api plugin for auto1111 is cool but buggy >maybe it's the webui, maybe i should update it >check the bug tracker >no new open issues in two weeks, cool >git pull >everything's busted >can't even load models >check the bug tracker again >i was on page 3 open source software, why
>>23305 and in case anyone else has this problem, i had to re-enable all the default plugins
I have no desire to coom and I can't tell if my libido is gone or if the improvements to raw T2I brought by V3 are so good that I'm more interested in doing all the SFW stuff that would take me hours to do on local between rerolling and inpainting.
>>23290 >>23291 It's not illegal where I live, but you know this shit is stored/saved for perpetuity and can be brought out when politics/culture shifts enough right? Not to mention errybody getting oprah winfreyed with breaches these days.
>>23292 Breaches, legal/financial pressure, and changed management would like a word. No clue if they're publicly traded, but that would be extra cause for concern.
>>23298 that's literally insane. that's just straight from NAI with no inpainting?
>>23311 yeah it's insane how people have the freedom to gen anything (and even pay for it) and still choose to generate niggers fat niggers even
>>23309 >It's not illegal where I live, but you know this shit is stored/save for perpetuity It's not. >and can be brought out when politics/culture shifts enough right? Israel is shitting itself right now. What you SHOULD be concerned about is the OpenAI jew getting replaced by an even worse jew.
uh... can you actually do darkness other than night? friend wants to do an office with dim lights but we can't figure it out
What's that one thing the furfags implemented to eliminate the need for prompt ordering?
>>23318 Alright, how do I upscale/enhance this properly? Switching samplers might fuck it up
>>23319 i use prompt guidance rescale 0.2 but otherwise that upscaling has a lottery too
>>23320 What about strength and noise?
>>23321 i've played with them and always came back to 0.5 and 0 haven't done a whole lot of A/B testing because it's limited, of course
(1.47 MB 832x1216 2b6d.png)

(1.04 MB 832x1216 1gn3d5.png)

(1.11 MB 832x1216 1b 5fd.png)

(1.55 MB 832x1216 g15rw.png)

think I've finally settled on settings that I like, took a few days
>>23302 Oh true, didn't consider that. Guess the resolution at high denoise should be a non-issue now with kohya's hires fix as well. iirc someone modified kohya's technique to make it even better too.
>>23319 I noticed Euler works a lot better for upscaling in NAI for some reason
So... NAI (v3) still has that very fun bug where all of a sudden it just bugs out and shits itself hard, seemingly ignoring tokens (below 74/75 yes, I'm aware of the new bug) and just generally shitting the bed in terms of composition, anatomy, etc. This happened on paid V1 (genuinely felt like the AI had its good and bad days) as well and especially on local where you more or less needed to restart SD or even the entire machine to fix it. Has anyone else noticed this? At this point I think it might be inherent to Stable Diffusion itself somehow, it's as if something just fails during part of the process.
>>23326
>too puffy
when it starts reminding you of a ball sack
nai inpaint via a1111 plugin is really good
model released two weeks ago
(1.30 MB 1024x1024 test.png)

Test, sorry to whoever made this, I need a reference both on 4chan and 8chan
>>23025 Updated the 8chan userscript, now supports NAIv3 metadata, didn't do much testing as 8chan is rather slow via my VPN right now
>>23333(me) Also what exactly caused the quality of gens to skyrocket? I've been away for a month or two, any quick rundown on NAIv3?
>>23334 they cooked
Which NovelAI tier should i go for? And how much better is it at actually knowing characters without loras?
>>23339 for pure imagegen, it's sadly $25 or bust. in my testing it can do any character with ~150 tags on danbooru without much difficulty. it's not wildly better than a high quality lora unless you really like greebled-up gacha girls and you NEED her to have a correctly depicted antikythera mechanism for her right eye or else you can't get hard.
>>23340 Any idea when local will ever catch up, if ever? or will i have to be a good little goy?
>>23341 the only obstacle for local is scraping up some cash and some volunteers to do a lot of tedious scraping and data-setting so i hope you're okay with furry models because that's all you're getting lmao
>>23000 Interesting, the stealth metadata survived image editing, guess it will be fine as long as you don't touch the left edge of the image
>>22690 Anon how come your image contains stealth PNG metadata but not in NAIv3 format? Is there a local environment that supports that?
(1.06 MB 832x1216 catbox_e883ij.png)

(2.98 MB 1856x1280 4fhd8.png)

>>23345 even micropeens need love
>>23342
>the only obstacle is scraping up "some" cash
you're gonna be in for a very, very rude awakening
>>23341 max is june next year
the furry is talking about offloading his furry model to the cloud intermittently to train the anime model, so we'll get earlier but slower epochs (the first few won't be usable yet, we'll need to still wait a while)
currently 1.8M images from gelbooru
>>23337 so cute. catbox plz especially for the second one.
>>23351 >max is june next year
>>23353
>local can't catch up because uhhh
>IT JUST CAN'T
okay? if it's not a corpo paywalled model it CANNOT EVER be good
>>23354 Nice strawman, schizo. Made it up all by yourself or did your imaginary friend help you?
>>23353
>8mb for a 498x498 gif
server space well spent
>>23354 Why did you suddenly sperg out like that
>samefag
fagfag
>seething
huh?
huh? huh? huh?
(184.90 KB 1095x671 Untitled-1.jpg)

>>23352 If you are looking for metadata, it's encoded in the alpha channel of the PNG file, you may use the now updated 8chan userscript in OP to extract it in browser
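for anyone who'd rather do it outside the browser: as far as I understand the stealth format (going by the original webui extension the NAI/userscript implementations are based on), the data is packed into the least significant bit of the alpha channel, read column by column starting at the left edge - which is also why the anon above noted that editing is fine as long as you leave the left edge alone. rough sketch, the exact header layout (magic string, 32-bit big-endian bit length, gzip for the compressed variant) is from memory so verify against the userscript source:

# decode alpha-channel "stealth" metadata from a PNG
import gzip
from PIL import Image

def read_stealth_alpha(path):
    img = Image.open(path).convert("RGBA")
    w, h = img.size
    alpha = img.getchannel("A").load()
    bits = []
    for x in range(w):          # column-major, left edge first
        for y in range(h):
            bits.append(alpha[x, y] & 1)

    def take_bytes(nbits, offset):
        chunk = bits[offset:offset + nbits]
        return bytes(int("".join(map(str, chunk[i:i + 8])), 2)
                     for i in range(0, len(chunk), 8))

    magic_bits = len("stealth_pnginfo") * 8
    magic = take_bytes(magic_bits, 0)
    if magic not in (b"stealth_pnginfo", b"stealth_pngcomp"):
        return None
    data_bits = int.from_bytes(take_bytes(32, magic_bits), "big")  # payload length in bits
    payload = take_bytes(data_bits, magic_bits + 32)
    if magic == b"stealth_pngcomp":
        payload = gzip.decompress(payload)
    return payload.decode("utf-8", errors="replace")

print(read_stealth_alpha("nai_gen.png"))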
(1.71 MB 1024x1280 catbox_yna7e8.png)

(1.70 MB 1024x1280 catbox_zwgdnw.png)

>>23363 Why is the icon sometimes green and sometimes not, even though it still works? Is it filename based?
>>23365 It's file name based, currently the same logic as the 4chan userscript, a green icon means there's a higher chance it has metadata (uploaded to catbox / has an obvious NAIv3 file name)
(1.27 MB 832x1216 nene.png)

Does anybody know if there are any rules to the novelai api? i'm using a script to generate images with random prompts but i don't want to be banned if i can avoid it.
>>23368 some glowing users on 4cuck posted a d*scord message from NAI devs saying any automation (ie: not you pressing the generate button for each and every gen) is forbidden
the actual limit is up to them
>>23369 would a script that simulates me clicking and inputting the text on the page be allowed then?
(4.20 MB 3840x2336 xyz_grid-0025-1354257554.jpg)

With the power of the new hll lora and my shitty furryrock fizrot lora I now have... an ok fizrot style on easyfluff
>>23369 >hey, don't be retarded, they said automation is against the rules so don't abus-- >GLOWIE GLOWIE GLOWIE
>>23372 at some point you have to assume this kind of shit isn't accidental but rather part of a deliberate, orchestrated scheme to reduce useful words to meaninglessness
>>23371 Huh? Can you use a Lora that was trained with NAI? Or will you always need to train the Lora on EF first for it to work properly?
Here is a dump of (almost) everything I've made using easyfluff+hll6.3 so far https://catbox.moe/c/74ogn0
It's obviously a huge jump in quality from NAIv1 based checkpoints, here are the downsides I've observed:
It's very high quality, to the point where it kind of looks like actual art, lots of images might seem to be missing a lot of telltale signs of AI art such as high contrast. Might just be due to being based on furry art from the start, but it seems "realer". This means that if you have a particular fondness for the style of your favorite NAIv1 based checkpoint, it WILL take some getting used to. I for one really liked my custom mix and I do miss it
It's really hard to get the usual nearly-#FFFFFF skintone that is common in anime. I literally can't do it, weighting pale skin very high gets you close but then the skin is also very saturated
Most of your results will look like they need a contrast boost, with some exceptions depending on which artist style you're prompting for. The image filters extension works for this and I basically permanently have it on in img2img with just a +1 boost for when I upscale my base images x2, you might also need a color boost
Your mileage with style loras (trained on fluffyrock) will vary, probably a lot. Probably owing to the training model being primarily focused on furry art, style loras are dampened and maybe this can be offset by higher learning rates but I can tell you it's not really offset by adding more epochs
Both character and style loras are dampened by hll itself. This is also true for styles when it comes to the NAI version of hll. Do note that adding any of the quality tags like masterpiece and such will force soft shading into the image, you might have to play around with the intensity of 'lineart' 'flat colors' 'masterpiece' 'best quality' and 'digital media (artwork)' to get a style more accurate to a given artist or character
Concept loras are even more of a mixed bag, my main experience is with clothing but it might seem that they aren't even on with some gens. Character clothing in character loras is also affected
Easyfluff (with hll at least) seems to have a habit of only generating an upper body image unless you explicitly tag something that isn't on the upper body. i.e even 'pubic hair' isn't enough, you have to add 'pussy' as well to get that in the frame sometimes
Due to conflicts between what furfags think are big titties and what animefags think are big titties, you might find breast size 'swinging' between gens
Easyfluff (with hll at least) seems to have a very very hard time making it so the character you prompt or use a lora for is the 'doer'. Even with controlnet, I rarely got a gen that properly depicted a taker pov of getting fucked by the character. Without controlnet, the character was always the one getting fucked no matter who had the dick. This might not be an issue with yuri but I don't prompt yuri
The easyfluff+hll combination has very high highs, but lower lows than the folded 1 trillion times NAI mixes. While modern NAIv1 mixes basically generate a usable image every time and it's about that search for the best one, you'll run into quite a few gens that just have broken or phantom limbs, hands in the completely wrong orientation, etc. I feel like this is a symptom of hll and easyfluff clashing since they are wildly different
This is like 80% of the way there, and has fully convinced me that we really do just need someone to make a truly modern anime model. SD 1.5 can still do so much good.
>>23375
>This is like 80% of the way there, and has fully convinced me that we really do just need someone to make a truly modern anime model. SD 1.5 can still do so much good.
forget the slippery (cope) slope, it's a fucking cliff at this point
>>23376 how much are they paying you
>>23377 more than you can crowdfund in a year but that'd imply getting the crowdfunding started in the first place, good luck
>>23374 The fiz rot lora is one I trained on FR but it isn't very good. My experience with training style loras on FR is if the artist has mostly furry content a style lora will probably work decently well but if it's mostly non-furry then you will probably have more trouble. It seems almost the opposite of NAI where training styles is simple but training characters is more complex - the opposite seems to be true for furryrock.
>>23375 EF+HLL 6.3 is functionally like a "mini NAI v3" with the ability to prompt artists, though the scope of promptable characters (and artists) is pretty limited. It manages to be much better than previous iterations of HLL but still far behind NAI v3 levels of consistency. The few advantages it has over NAI v3 are not having to worry about prompt order, plus the pool of tags that the furry models have that Danbooru doesn't; though this mostly applies to specific fetishes more than anything.
>>23375 NTA, but putting this here too since it's on topic
a gallery with 300 combinations of 3 artists (weights 1.1-0.9-0.8) recognized by the HLL lora on EasyFluff, so that people can get started easily by picking mixes they like and then tinkering (I'm like that, so why let make people enjoy)
catbox gallery : https://catbox.moe/c/6etv3n
299 pics, one fell through during upload and I'm not going to find out which. And the zip with all of them : https://mega.nz/file/9iZRXAab#LnK6xDe-2ca-5ndFT09uCrgz5pboVDm-YWdU2HGLZhg
this bodes well for the future furry endeavors
>>23380 >so why let make people enjoy apparently had a stroke when writing that, meant >so why not let people enjoy it too
(2.54 MB 1280x1920 catbox_yg3xjh.png)

(2.67 MB 1280x1920 catbox_t1w78y.png)

Turns out my shitty FR mousouduki lora also works well with HLL6.3, so I guess I might jump back in the lora training game for a little bit
moved the past year of txt2img to a backup drive and man watching the thumbnails go by really drove home how hard things stagnated
>>23383 Shit's sad but NAIv3 isn't really making me gen more.
>>23384 if anything i'm probably genning less because i like picking out the good ones and tinkering with them in img2img and photoshop. looking back i had like a 1.5% hit rate across all of sd and now i'm getting like a 20% hit rate. i already have 900 something saved in about a month. it's just too much shit to want to actually work through. terrible problem to have.
>>23385 Well I'm basically doing the same thing as you but I do have a huge backlog of gens that just needs a little bit of love that I'm slowly getting through.
Tried to get a fair comparison of NAI 3 vs hll6.3. Stole this prompt from /h/ and wanted to gen more of it so figured it would work well for a comparison
>>23387 Damn, and you added plenty to that prompt. SDE native with smea looks surprisingly good, I thought I settled on Euler already.
>>23388 I did try to make it fair with similar prompts, but you can easily get a gen just as good with a much shorter prompt on NAI 3. smea has really good detail without upscaling, but I stayed away from it because it looked terrible with Euler unless it was used for upscaling. For some reason I decided to try it with the other samplers and it just kind of works with SDE
Does anyone know how to get tag autocomplete to work with e621 tags? I want to give HLL a try but I'm only familiar with booru tags
>>23390 figured it out, nvm
(1.23 MB 832x1216 catbox_okar58.png)

Honestly hope this shit gets leaked
(1.02 MB 1216x832 catbox_c12nwu.png)

do we know exactly how NAI3 is so good?
>>23392 B A S E D
(1.01 MB 1216x832 catbox_pchdcy.png)

(1.08 MB 1216x832 catbox_qu3ahe.png)

(1.20 MB 1216x832 catbox_1be573.png)

>>23395 it's blowing my mind that you can be specific with the guy now
>>23393 No idea, I've always questioned the anti-XL posters and their "uhhh we don't need a bigger unet or more params 1.5 can fit so much more" cope even before V3 but now? Until they can actually show results I'm just going to assume that being XL-based is at the very least not hindering V3 as a model. I'd love to know how it can do up to 3 girls consistently (in landscape anyway, feels pretty inconsistent in portrait res) without too much gacha though. Latent couple even without controlnet would probably be even better but this isn't bad at all.
(3.18 MB 1280x1920 00060-4116276009.png)

(2.93 MB 1280x1920 00040-1434655841.png)

(2.86 MB 1200x1800 00045-3813543283.png)

(3.13 MB 1280x1920 00036-2407499898.png)

I haven't wanked my schmeat this much since the novelai leak.
(1.18 MB 1120x2240 00036-557917225.jpeg)

Interesting, the furries finally got me impressed: the lora I made with the traditional method blends well with easyfluff and the easyfluff hll6 lora. Might make another model. If the furry anon is here, how did you even do model mixing with vpred stuff?
>>23398 really nice lighting in the second
Do you have to prompt with danbooru tags or e6 ones since you mixed both?
>>23402 I have the e6/furry tagger installed but I'm not sure how to make it work with the tag autocomplete extension. Aside from that, no, I used traditional tags, unless furry tags would help even more
>>23398 Have you tried to do your old partially submerged bathtub prompt in novel ai yet, it was one of your best ones.
>>23400 nice eyes anon :)
>>23405 I know she has the scissor pupils, pain :(
(1.15 MB 832x1216 catbox_dgsqab.png)

genuinely fucking insane
GET LEAKED GET LEAKED GET LEAKED GET LEAKED
(1.31 MB 832x1216 catbox_9gaqmj.png)

(1.26 MB 832x1216 catbox_escdja.png)

scenes and concepts that would've given local a stroke are pulled off by V3 without breaking a sweat. how?
(2.51 MB 1280x1920 00028-1424610781.png)

(3.63 MB 1280x1920 00124-2472389375.png)

>>23401 Thank you! I'm going to keep fiddling around with this HLL stuff. >>23404 Anon, I'm not the Meido anon, I mostly post Chitose gens. That being said, I also want to see what those prompts would look like in there!
>>23410 Honestly, looking at that second pic you can't really blame anyone for thinking it lol
>>23409 Like I've said in >>23397 I have no fucking idea but Reisen being the dominant one in the Junko-Reisen pairing is something I didn't know I needed until now.
>>23411 Hahaha either way it's no shame to be mistaken for Meido. And who doesn't love angry maids? >>23409 Too many things happening at once in those images. Local needs work
I'm only an anti-sdxl anon because of all the retards outside of here shilling it hard, and even worse, the LORAs being posted. otherwise it just took people who actually knew what they were doing to make a good model out of that concept, too bad it needs to be leaked to make it even better
(942.31 KB 1216x832 catbox_g7du3o.png)

(939.77 KB 832x1216 catbox_5wi9cw.png)

(1.30 MB 832x1216 catbox_2ca3j9.png)

(1.18 MB 832x1216 catbox_lkifm5.png)

>>23413 splitting the bill and sharing a sub with a friend. no wonder they can get away with charging $25
(928.88 KB 832x1216 catbox_v8yeb9.png)

(798.32 KB 832x1216 catbox_iwykeq.png)

(1.16 MB 832x1216 catbox_c7ng54.png)

>>23412 yeah anon, the local gacha was such a pain in the ass. but now it gives you pleasant surprises
>>23397 Local SDXL is still a dead-end though, awful to train on and there are pretty much only pajeet-level finetunes out there, completely irrelevant for anime. Don't know if 1.5's unet is actually "enough" or not. Most of the hostility towards XL was due to the shills peddling it for several months and posting absolutely subpar results.
>>23398 Lovely, more fan art of my wife and also more Chitose. >>23404 That's a really good idea, the bathtub prompt is actually one of my fav ones but I somehow forgot about it. I'll give it a try when I get home from work.
working on Based68 and finally implementing vpred, got inspired by the hll6 easyfluff combo and supermerger's latest update has no bugs for me compared to the time I was using it for Based67. Will post a link to the first test model here, no MBW this time
>>23402 >>23403 hllanon posted a tag file when he released the model, but uploaded it on litter. It includes all the tags actually used in the lyco, which he tagged from danbooru, the typical autotagger, and the e6 autotagger: https://files.catbox.moe/q5bjw2.csv
downside is there are no tag corrections like the danbooru one has
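If anyone wants to sanity-check or extend that file: from memory, the csv tagcomplete reads is just four columns per line, tag name, category number, post count, then quoted comma-separated aliases, roughly like
some_artist,1,523,""
some_character,4,10231,"alt_name,old_name"
(the two rows above are made up for illustration; compare against the danbooru csv that ships with the extension before trusting my column order)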
>>23421
>vpred
haven't been in here for a long while but does this actually mean anything major in terms of improvements? it's still gonna be 1.5 slop but *maybe* a bit better, right?
>>23423 It should alleviate the need for noise offset at the very least. It seems to work well for furryrock and NAI v3 but maybe there's a reason why it's taken so long for this to come to a 1.5 NAI mix.
>>23425 nicee now add the soap bubbles
>>23426 didnt it fuck other lora's up?
>>23427 Sure, I do actually prefer the rose petals myself though.
(3.82 MB 3840x2237 xyz_grid-0000-2714778245.jpg)

>>23428 Considering my old loras seem to still also somehow work with hll6.3, I don't think so. Though they might still benefit from being retrained
>>23430 can furryrock be used out of the box with any lora I have or do I need to do some black magic?
>>23424 cute
(772.67 KB 640x960 catbox_2yx0e0.png)

(420.05 KB 640x960 catbox_k0zmcw.png)

(847.45 KB 640x960 catbox_8jmpjy.png)

(390.20 KB 640x960 catbox_7slldy.png)

>>23431 It's hit or miss, characters mostly work but not as well as they would on B64. Styles only really work if they supplement something HLL already knows. It also tends to get more washed out the more loras you add, especially on top of HLL
>>23434 nice, guess only dick lake is left now.
>>23435 kek, I'm gonna leave that one for another day. I'm also done with my spamming, for now.
>>23397 >assume that being XL-based is at the very least not hindering V3 as a model I wonder what difficulties they ran into though, I remember someone mentioning they almost gave up on finetuning XL before running into the V3 we have now.
(1.23 MB 832x1216 Untitled.png)

>>23336 I love how v3 has an understanding of before/after dakis.
Anyone sometimes getting gens like these with NAI?
man it's nice to have decent hands. >>23439 i get red now and again. feels like the model just fails to resolve an internal tension in the prompt sometimes and shits out some weird stuff.
>>23438 That's hot.
>>23439 Like >>23440 said it happens sometimes, happened on Web V1 and local V1 too, SD kinda shits itself
is there anyway that the hll6fluff anon can just merge that lyco into easyfluff
(1.12 MB 832x1216 catbox_pvih2e.png)

(1.49 MB 832x1216 catbox_0l3olw.png)

(1.20 MB 832x1216 catbox_pcgq9g.png)

(1.30 MB 832x1216 catbox_4tgnlv.png)

Based68 first test for anons here, link will be down in 3 days (I think that's how it is for a non-super-active gofile link). Would appreciate it if Easyfluff anon took a look at it, because I believe I un-furryfied it with the combinations I did, but it is still a v-pred model that works better at clip skip 1 compared to other anime models. gofile.io/d/nAycRY
Of course you will need this extension set to 0.7 to run it: https://github.com/Seshelle/CFG_Rescale_webui
>>23448 btw it was only 3 combinations I did with supermerger compared to the schizo shit I did with previous based models, it was basically all train/add difference with easyfluff and the version of Based67 that resembled based64
>>23448 I just hopped inside bed fuck
>>23446 I guess you would probably want to use supermerger. Related, someone on vtai converted EF+HLL6.3 into TensorRT to make a big artist comparison, which I should repost here.
Full image: https://mega.nz/file/NPJ3xATI#NM7vNuukCdk0FVA41y-j_OUFd920yECWaddFi8mzDZ0
Individual images: https://mega.nz/file/AahDXLJI#vlDZf-_8P0o6-Gybyg6Da13d2JnqUneQDEIYSuO_ils
>>23447 I wasn't ready for loli Mirko but I'm glad it exists, reminds me to gen some Eri
>>23451 tried, but I just get errors, like "no attribute up" or some shit. otherwise I tried merging another locon into easyfluff and it just fried it. not positive if you need train difference all the time when merging vpred models, but oh well, I'll just keep an eye on what the fluff anon is doing
>solo, (male character), loli
it really is that easy
Probably not going to be the official version of Based68, might need to play with some of the merging settings because it seems a bit too contrasted and sharp. Might also want to find a way to put hll6 into the model without overbearing the Based model I used at the start
>>23448 Tried the mix and it only outputs a full brown image when I'm loading it in v-prediction mode with the config, but it works normally without it. Obviously it couldn't generate a black square from the prompt without putting something white in half of the image. Am I doing something wrong, or does the checkpoint not have vpred?
>>23456 it worked when I used it, cfg rescale is the only way the model properly outputs too. Not sure, I'm trying again, but otherwise I'm not sure if there's a way to get supermerger to make a proper yaml for the model to use v-prediction like with easyfluff
>>23457 I think I know what happened, the yaml from easyfluff was not working because it was not model A during the supermerge. oof, going to delete the file and go back to the drawing board
>>23448 >0.7 I use it at 0.4 usually, but that's the low end. Usually enough, not much contrast loss
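Side note for anyone wondering what that rescale slider actually does: as far as I can tell (not the extension dev, so treat this as a sketch of the idea rather than their exact code), it's the CFG rescale trick from the zero-terminal-SNR paper, roughly this per sampling step:

import torch

def cfg_rescale(pred_cond, pred_uncond, cfg_scale, phi=0.7):
    # plain classifier-free guidance first
    cfg = pred_uncond + cfg_scale * (pred_cond - pred_uncond)
    # rescale the guided prediction so its per-image std matches the conditional one,
    # then blend back with the unrescaled result; phi is the slider value (0.7 / 0.4 above)
    std_cond = pred_cond.std(dim=(1, 2, 3), keepdim=True)
    std_cfg = cfg.std(dim=(1, 2, 3), keepdim=True)
    rescaled = cfg * (std_cond / std_cfg)
    return phi * rescaled + (1.0 - phi) * cfg

Point being it just reins in the blown-out contrast you get from high CFG on vpred/zsnr models, which is why these mixes look fried without it.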
So did we find out why you get a different result after changing models compared to when you launch the webui with the model already loaded? Can't find anything on that anywhere.
>>23461
>why
no, but it's not just using the config or whatever of the other model, which is spooky
the furries actually improved one of their models by a noticeable margin thanks to this, because whatever residue is left from loading a newer revision (but early epochs) and then loading an older revision (fully trained) fixed glaring issues of the old revision
in the end they cosine merged the deepest unet block of the old revision with a fraction of a percent of the new revision and it made a much better model
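No idea what their exact recipe was (iirc supermerger has a cosine calcmode), but mechanically a "merge only the deepest unet block" operation is nothing exotic, it's just a weighted blend restricted to certain state dict keys. Hypothetical sketch using a plain linear blend; the file names, key prefix and ratio are only for illustration:

from safetensors.torch import load_file, save_file

old = load_file("old_revision_fully_trained.safetensors")
new = load_file("new_revision_early_epoch.safetensors")
alpha = 0.005  # "a fraction of a percent" of the new revision

merged = dict(old)
for key, w_old in old.items():
    # only touch the deepest (middle) unet block, leave everything else as the old revision
    if key.startswith("model.diffusion_model.middle_block.") and key in new:
        merged[key] = (1.0 - alpha) * w_old + alpha * new[key]

save_file(merged, "merged_test.safetensors")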
>>23461 >>23462 Found something about it. Supposedly it happens when you use loras. https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/13516
>>23463 Nah it happens even without it. I talked about it here a few threads ago, though I don't recall exactly when. I think it's related to caching models in ram, but I couldn't confirm whether or not the furry debacle took place with or without that on.
>>23461 I could do a git bisect but it's a pain in the ass for this scenario. I'm sure nobody else has gone out of their way to work on it for the same reason.
>>23442 very cute results, what model was used here?
(1.24 MB 832x1216 v7y6b.png)

(1.09 MB 832x1216 b877bu.png)

(1.13 MB 832x1216 bn89b.png)

hapu sexoooo
(2.12 MB 960x1440 catbox_j5gz26.png)

(2.07 MB 960x1440 catbox_y7ohtb.png)

>use hll6 on easyfluff
>load up 4 nai loras (character + 3 styles)
>results are more coherent than on NAI itself
I'm going to shit myself. Obviously there are issues like the colors getting washed out, but I think diffusion models have a much higher degree of flexibility than anyone has managed to take advantage of.
(1.43 MB 1216x832 catbox_ehb52z.png)

(1.29 MB 1216x832 catbox_z82k0r.png)

i got no idea how the sauce was made. things that would make local piss and moan and turn into an amorphous blob just work in nai
>>23471 uh oh you just tried to buy the rocket launcher in ADiA without having like 50k on hand
>>23471 it's because it's an XL model which has almost 7 times as many parameters as sd1.5 on local, it could be so much better though if they equally focused on text encoder and unet instead of just making the unet massive and marginally improving tenc by using openclip..............
(1.15 MB 832x1216 catbox_6oosed.png)

>>23473 redpill me on text encoders. apparently dalle3 was so good because it used gpt in a similar manner?
>>23468 nai3
>>23474 From a layman's perspective, the text encoder is what breaks down your prompt and converts it into numerical embeddings which are then fed to the u-net to actually assemble an image. SD1.5 used CLIP which is a very rudimentary neural network, SDXL uses OpenCLIP which is better, and DALL-E 3 uses an LLM (some pruned version of GPT) instead of a rudimentary text encoder, which is why it can comprehend really crazy prompts with lots of subjects.
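To make that a bit more concrete, here's roughly what the SD1.5 text encoding step looks like if you do it by hand with huggingface transformers (minimal sketch; the webui does the same thing internally plus its own emphasis/chunking logic on top):

import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

prompt = "1girl, maid, looking at viewer"
tokens = tokenizer(prompt, padding="max_length", max_length=77,
                   truncation=True, return_tensors="pt")
with torch.no_grad():
    # one 768-dim vector per token slot; this [1, 77, 768] tensor is what
    # the u-net cross-attends to at every denoising step
    cond = text_encoder(tokens.input_ids).last_hidden_state
print(cond.shape)  # torch.Size([1, 77, 768])

So "better text encoder" basically means better vectors going into that tensor, which is why swapping CLIP for an actual LLM makes such a difference in prompt comprehension.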
>have to merge easyfluff first with train difference to get vpred working
>doesn't look as good as b67-p7 being the base model
>try normal A-B model merging
>even the yaml can't save it from looking washed out
didn't know vpred would be such a pain for merging, if the guy that made the extension is here please look more into it
>>23476 this is what i really want to see from nai4. they even have their own reasonably not-retarded 3b text model to make it light.
>>23476 Lol u serious? I thought it was more complex. Anyone can now make a custom GPT model without needing to be big brain
>>23457
> cfg rescale is the only way the model properly outputs too
Well, it should at least output something, not just a brown image? It seems like I was loading a non-vpred model in vpred mode
> to get supermerger to make a proper yaml for the model to use v-prediction like with easyfluff
AFAIK there's no need to change much in the config, just parameterization: "v" in the right place?
>>23464 What were your settings for caching? I've had it at 1 all this time for everything and still sometimes couldn't reproduce an image fully; I thought it was due to high batch size generation
It's been a very long time since I've looked at these threads. Are there any big new metas lately?
>>23481 nai3 seems pretty interesting. But it's not local and it's not free.
>>23482 I'm looking through the thread and seeing some pretty good stuff. Is the $15 one enough or do I need the $25 one for good results?
(1.29 MB 1280x720 catbox_lzr95d.png)

(1.27 MB 1280x720 catbox_uococr.png)

(1.13 MB 720x1280 catbox_54rg3u.png)

(1.13 MB 960x1280 catbox_gxff41.png)

>>23483 $25 gets you unlimited gens. There's also the easyfluff+hll6.3 combo which is a nice alternative
(2.80 MB 1440x1440 00490-3826304484.png)

(2.77 MB 1440x1440 00246-874679070.png)

(2.69 MB 1440x1440 00210-712236470.png)

(2.74 MB 1440x1440 00217-2261295533.png)

>>23480 Yeah, I realized when looking at the .yaml files for easyfluff and that vpred hll5 model that was released long ago that all they had was that parameterization "v" line in the top section. Also yes, the model you had was not a vpred model due to some mixing bs that I managed to solve with the version I'm about to upload and link here. I'll upload it to other places once I test all the artists the original anon extracted from the locon and see if it works the same.
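For anyone else fighting the config: the way I understand it, you don't write anything new, you just copy the full yaml that ships with easyfluff, give it the same base name as your merged checkpoint, and make sure the parameterization: "v" line is still there under model.params (that line is the only v-pred specific bit). Minimal sketch, the paths are hypothetical:

import shutil

# same folder, same base name as the checkpoint -> webui picks the yaml up automatically;
# inside it, model.params must contain `parameterization: "v"` for v-prediction to kick in
shutil.copy("models/Stable-diffusion/EasyFluffV11.2.yaml",
            "models/Stable-diffusion/Based68-test.yaml")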
Here's the proper v-pred Based68 model. I was thinking about having hll5 vpred be the vpred model for mixing, but I thought it would be overkill given that the HLL6 lycoris had a much better combo with Easyfluff. Tested out some of the artists that were extracted from the locon and yes, the tags do work, so make sure you also put the hll6.csv tagging file into the tag autocomplete extension's folder/tags and just switch to it in your automatic1111 settings to test out the artists. https://gofile.io/d/8eN2VW
>>23485 damn that's nice
>>23487 there might be a v2 because the model doesn't heavily copy a character lora's art style, which may be a good thing for some people, but I'll find a way to adjust that.
>>23489 Is there anything you can do about the tumblr noses?
>>23488 Thank you! I'm glad I gave the easyfluff+hll6.3 combo a try. Might post some more gens later, but also don't want to spam too much samey stuff. Have any anons over on 4chan been posting hll6.3 gens? Wondering if I should check there for prompt inspiration
>>23487 I'm new to these types of models. Do I need to remove other yaml files from the folder, or can I just leave them in there?
>>23491 seems like a few but i'm not practiced at spotting them
>>23492 you can just leave it in the models folder
>>23490 I'll see but that might be something easyfluff left behind when hll6 did its best to unfurry the basic gens without furry tags.
>>23493 Nvm, there is so much negative bs to filter through there that it's not worth the time.
>"oh I'm not a shill, I just tried it and it impressed me"
word for word multiple times per thread
oh fuck it's a me
>>23494 Yeah, I figured. You can somewhat combat it with artist tags, but there are still quite a few gens that keep them. Apart from the noses, it's really solid. A noticeable improvement over b67, and artist tags are a blessing.
>>23486 >>23487 Nice one, looks like it can do just as well as the fluff model while keeping some old lora compatibility
(304.81 KB 512x512 catbox_kqq3hl.png)

>>23487 All my images are coming out absolutely fried, even with no loras and default vae. Any idea where to troubleshoot? I don't have any issues with easyfluff + hll
>>23498 if you use hll and easyfluff then you most likely already use cfg rescale, and assuming you kept the yaml I provided in the model folder along with B68, I'm not sure what else I can really provide for solving your issue.
It seems a new version of hll6 is out: https://huggingface.co/CluelessC/hll-test/blob/main/lyco/hll6.3-fluff-a5a.safetensors
>>23461 >>23467 Assuming you're referring to >>20877, I tried that now on the latest dev commit, and it actually works properly now. Note I'm using "--disable-model-loading-ram-optimization" and I'm not caching any models in RAM. If someone wants to fuck with those settings to see what breaks it feel free, then maybe I'll look more into it.
>>23495 >he outed himself Called it.
>>23503 xformers are non-deterministic. See if there's any difference without them.
(1.29 MB 1024x1536 00011-1445357354.png)

(1.48 MB 1024x1536 00026-4202647816.png)

(1.78 MB 1024x1536 00181-3385593215.png)

(1.80 MB 1024x1536 00034-34032552.png)

https://mega.nz/folder/ctl1FYoK#BYlpywutnH4psbEcxsbLLg
added jgeorgedrawz, might have to start learning how to train on furshit models at this rate lol
>xformers are non-deterministic
unless you're using an old version, no
(1.43 MB 1024x1536 01415-1616998855.png)

(1.51 MB 1024x1536 00216-1429190230.png)

(1.55 MB 1024x1536 01007-1123010386.png)

(1.77 MB 1024x1536 00047-542585859.png)

The only thing I don't love about v3 is that I still can't do the really obscure stuff or blocked artists that I like. But damn, it does popular characters/artists/tags really fucking well.
>>23510 the only issue I have with it right now is how it looks like crap by itself without using artist tags and doesn't fully mimic the art style of a character LORA. Otherwise it's good the tags still work but trying to keep the artist knowledge while trying to get the model to look good by itself and mimic character lora's a lot better without artist tags is taking some time. Luckily it seems like the original anon that started this whole fluff hll6 combo realized this too so I'm keeping an eye on what they're doing to see if there's any way to make the model better for both worlds.
>>23511 >the only issue I have with it right now is how it looks like crap by itself without using artist tags It's better than V1 for sure but consider the fact that local models had their default style tainted by AOM2 or worse (basilmix directly) >doesn't fully mimic the art style of a character LoRA What do you mean? Character LoRAs (ideally) shouldn't have styles built-in >but trying to keep the artist knowledge while trying to get the model to look good by itself and mimic character lora's a lot better without artist tags is taking some time ??? I tried reading that multiple times and it doesn't make sense.
>>23512 Basically the model doesn't have a built-in style, so without artist tags or artist LORAs it just looks bad, but knowing NAI V3 needs artist tags to look a lot better too, maybe it's fine the way it is.
>>23513 It does have a built-in style just like V1, it's just that V1's quality tags and negs gave it tunnel vision.
>>23507 Not anymore. Also no.
>>23487 Anyway, mind sharing the recipe and merging technique for that? Any plans to attempt a remerge of this using the newer hll >>23500? Looks like it has a lot more artists in it btw when comparing the provided txt files for them
>>23465 I was hoping it would let me specify a second denoise value for unmasked so I could upscale without giving the body too much detail, but this also looks cool.
(2.24 MB 1200x1800 catbox_fyavqp.png)

(2.45 MB 1200x1800 catbox_glvslw.png)

(2.49 MB 1200x1800 catbox_ay1v0m.png)

>>23516 It would definitely benefit from the newer hll, but at this point I don't really see a point of switching from EF+HLL.
B68-v3 vs EF11.2+HLL6.3 a4 vs EF11.2+HLL6.3 a5a
(2.51 MB 1280x1920 catbox_27wcgc.png)

>>23518 blooper
>>23516 Where can you find the new artist list?
>>23518 Well, I noticed that it doesn't have that many differences, but isn't it a better idea to have something anime-related as the foundation of the model, rather than fluff?
>>23520 Gonna reupload it, since the guy who trained it uploaded to litter and it will die eventually. CSV list with tags: https://files.catbox.moe/e6jc6i.csv , artist list with cosplayers included (not every entry is commented though, so be careful with putting it into a wildcard): https://files.catbox.moe/sizp9i.txt
>>23525 based
>>23525 Bridget without a dick is something I didn't know I would ever need until now. This is 1000 times superior. A magnificent way to massively improve a trash character
https://rentry.co/gitgudgayshit gitgud is dead so i reformatted the latest snapshot from wayback (late sept 2023) for rentry. won't update it but here's the raw if the original anon or someone else wants to continue from there
yes sir she's hardly used at all, only fucked every other sunday, lowest mileage milf you'll see
>>23528 Thank you, there's a lot of good stuff there.
>>23529 Very cute
>>23530
>mother and daughter
that's hot
I'm having a lot of fun fixing these naiv3 gens on local.
(3.04 MB 1440x1440 00053-2177424668.png)

(3.03 MB 1440x1440 00078-309852813.png)

(3.14 MB 1440x1440 00103-1611869809.png)

(2.90 MB 1440x1440 00015-946817591.png)

>>23531 Nice and festive
Last one of this prompt I'll post for a while: four seasons variant.
>>23532 Oh no, they are gonna get gobbed....
>A matching Triton is not available, some optimizations will not be enabled.
>Error caught was: No module named 'triton'
is there a way to stop kohya sd from checking for triton? it's annoying
(1.78 MB 1024x1280 catbox_kkydul.png)

>>23528 aaaand it's gone
>>23536 well fuck
>>23536 >>23537 didn't realize rentry was so pozzed. well here's the txt of the raw if anyone needs it. https://huggingface.co/datasets/lazylora/gitgud-gayshit-raw/blob/main/gayshitbackup.txt
(1.02 MB 832x1216 catbox_n0s83s.png)

(3.06 MB 1920x1280 catbox_plvc7n.png)

>>23539 Cute nene
>>23528 what happened?
>>23543 it's pozzed apparently, there might be a schizo or some self righteous glowie claiming copyright/safety violations any time it is brought up now
>>23546 Holy fuck I love that style
>>23544 I'm pretty sure it's due to cunny that was posted on gayshit
>>23548 he's my favorite 3d artist without a doubt. i hope he dumps his private stash someday because it's gotta be huge
(1.17 MB 1024x1536 00066-2811089389.png)

(1.43 MB 1024x1536 00042-2957369267.png)

(1.39 MB 1024x1536 00118-1893055134.png)

(1.40 MB 1024x1536 00114-3816411864.png)

(1.33 MB 1440x960 00056-2117079457.png)

(1.36 MB 1440x960 00067-4253215875.png)

(1.35 MB 1440x960 00070-4253215874.png)

>>23553 Digging this style. Is that on local?
(1.35 MB 1024x1536 00008-1458008777.png)

(1.30 MB 1024x1536 00046-1094657915.png)

(1.37 MB 1024x1536 00048-1094657917.png)

(1.50 MB 1024x1536 00026-327372447.png)

>>23554 yep. messing around with furshit models and the hll lora https://files.catbox.moe/c1slsp.png https://files.catbox.moe/mjwju3.png
>>23555 Very nice, thanks for sharing
>>23544 There's about a dozen forks of gayshit on gitgud, one of them is recent enough to have the cunny banner image and (maybe because of that) is inaccessible to anyone not logged in to gitgud https://gitgud.io/explore/projects?sort=latest_activity_desc&name=AI%20porn%20guide&sort=latest_activity_desc
>>23557 Such is the life of a prompter.
(1.63 MB 1200x1536 00008-753.png)

(1.76 MB 1224x1568 00014-743.png)

I'm in heaven
Unexpectedly went on vacation and had no internet for 3 weeks. Did I miss anything?
>>23564 If you already know about NAI V3 then no
>>23565 yea, and I assume no leaks so no one gives a shit
>>23566 yeah no one except for everyone
>>23567 >everyone paypigs aren't people
hang yourself freeturd
>>23571 i was semi-seriously thinking about trying to set up a tournament to find the single most dominant art style that nai3 knows. with like 60% certainty it was going to be j7w
(1.13 MB 832x1216 catbox_4kglv2.png)

(1.43 MB 832x1216 catbox_d6jvzg.png)

(1.02 MB 832x1216 catbox_tx536o.png)

>>23572 Kakure Eria is definitely up there, maybe hxxg as well and a few others I'm forgetting
>>23572 the one with most work posted, genius
>>23573 kakure eria would be a contender for sure
>>23570 >freeturd please stay in the /hdg/ containment thread and keep shitposting there, thanks
>>23576 >/hdg/ containment thread So, this one?
>>23577 no, the one we all left from in january
Not your safe space.
>>23579 never said it was. i'm telling you to fuck off with your garbage shitposts and stay with the trash
>>23563 These ones were made with Based68. BasedMixesAnon, if you're reading this, I encourage you to improve the mix by using the most recent hll6 version, a5a. The model atm isn't better than EF 11.2+hll6 a5a, but it has a lot of potential. You can clearly see it in these results. Keep up the good work! Don't give up!
>>23581 >Based68 where?
>>23580 > i'm telling you to fuck off with your garbage shitposts and stay with the trash And I'm telling you that this is not your safe space, freetranny, I am not going anywhere.
>>23582 >>23487 Probably the file expired now
>>23584 Was a test anyways, it's pretty much these two models given that the anime models I put into the merge barely changed much:
https://huggingface.co/zatochu/EasyFluff/blob/main/EasyFluffV11.2.safetensors
https://huggingface.co/CluelessC/hll-test/blob/main/lyco/hll6.3-fluff-a5a.safetensors
there is a nai version and a safetensor that is non-vpred, so I'm working with those for now to see if I can get something a bit more like the based models with the artist knowledge of hll6
>>23580 lmao didnt you start this? maybe don't bitch about NAI users if you don't wanna be called poor, thirdie.
>>23586 this >>23576 was my first message. neck yourself, jeet. you shills are always the one insisting gpus are a waste of money and to subscribe 25 dollars a month for shitty images
>>23583 What a badass!
>>23587 Are those shills in the room with you right now?
>>23590 is this supposed to make me upset?
>>23591 I'd prefer you to simply shut the fuck up, I don't really care if you are upset or not. You can use a furry cock to shut your mouth if you prefer that.
>>23592 not your safe space. cry about it more lol
localsissies, not like this
>>23594 Did we reach some sort of awful bad end where prompting on your own PC is bad and we have to shell out money for NAI? Where are the gay furry hackers to break into NAI and leak the model for the people?
>>23595 Nah it's not bad of course, I'm just throwing shit back at the anti-NAI schizo.
Shut up and post cunny, faggots.
(2.07 MB 1920x1280 catbox_5cw7jo.png)

The shit flinging isn't even entertaining at this point now that it's leaked into here
>>23598 just some faggot shitting up every thread he can. you can tell because he does it the exact same way in the other place and here
>>23599 very cute
Is the correct use for naiv3 artist_ or artist: cause I've seen both
>>23603 artist:
artist_ is just windows not liking colons in filenames
>>23604 Ah that makes sense thank you
>>23603 >>23604 Depends what you want to achieve. artist: artistname and artistname give different results
>>23606 should be artist:artistname btw
Found that single 3/4090 volunteer yet? :^)
>>23608 Why are you doing this man? I have absolutely no love for furry shills and copers myself but why keep poking the beehive like that? Is it really that fun?
>>23609 >Is it really that fun? It wasn't fun at first, it was more of a "come on, get real" thing. Now? Let me put it this way, the harder they cope and shill the furfag shit the more fun it gets. At some point you gotta realize that it's not just wishful thinking anymore, it's a full-blown delusion.
i wish the furry model people the best of luck success is good, failure is funny
>>23610 >At some point you gotta realize that it's not just wishful thinking anymore, it's a full-blown delusion. I agree but still it would suck to see this thread drown in shit, just saying.
>>23611 What the furfag zealots don't get is that despite all the (entirely justified) poking and laughing no one wants local as a whole (which is a very important distinction to make seeing how the furfags and their followers seem to unironically think that they suddenly represent local now as if there's some sort of battle going on) to stagnate, not even NAIchads. >>23612 >I agree but still it would suck to see this thread drown in shit, just saying. If you think there's shit in this thread just check out 4cucks /hdg/, at least we don't have boomerproompters here.
NAIshills are right, furry and hll anons had to release so many versions and it's just a cope, even based anon can't make good shit anymore so it's over for us until someone leaks NAI 3
>>23614 The fuck are you even talking about? Everyone has had to make different versions. RunwayML, NAI, Midjourney, Dall-e. Are you retarded?
>>23615 I think he meant that they've released so many versions and yet they haven't really improved.
>>23616 basically, NAI only needed 3 versions and it btfo'd everyone with the SDXL concept, local had no way to catch up
>>23617 We always knew that NAI was gonna be the one to fix SDXL, they also got hardware that open source groups simply don't have access to without throwing some money. And even if NAIv3 leaked, most of the local open source individuals or groups won't be able to finetune it without something stronger than 30/4090s. Probably for the best that open source just pushes 1.5 as far as it can until a new NAI leak plus someone willing to dump some money to finetune an XL model appear.
>>23618
>some money.
You're vastly underestimating just how much money is required, it's not just "some" money, and wannabe ML autists are nowhere near as competent as they think they are; renting is even less feasible considering how much experimentation and how much time stuff takes.
>>23617 They have dozens of unreleased models and whatnot, they only released the most successful ones. The point is that they were able to iterate and experiment at lightning-fast rates compared to local.
(1.63 MB 960x1440 00014-2244143701.png)

(1.61 MB 960x1440 00054-1552332788.png)

We (should) all want local to succeed. Keep posting hll gens fellow localbros and keep posting those perfect not-inpainted hands NAIchads. We're all in this together.
>>23619
>You're vastly underestimating just how much money is required
I know exactly how much is required. The "some money" was just me being snide about how no one wants to spend, or even have the bankroll, required to operate even at the functional scale to roll out finetunes regularly to beta test, iron out bugs, grow the dataset, and be functional for release and collect feedback. Furries have been operating on inefficient but free gibs and essentially loopholing Google's demands to keep their operations going. Once that runs out, they will stagnate without paying up serious money. The only way renting is viable is if you already have a template in place that works as well as it can on high-spec local hardware and then expand from it on A100/H100s. I haven't been keeping up with the furries' progress on XL, but they never got a single XL finetune that functioned, it was always retrofitted 1.5 + 2.0 feature models. So from the way I see things, XL finetunes by localfags will not be happening in the coming year.
>>23621 >how no one wants to spend, or even have the bankroll, required to operate even at the functional scale Look at it this way, if the furfaggots aren't willing to invest "their" (parents') nearly unlimited money into it then anime chads are even less even less likely to. Maybe it's the whole culture surrounding art and art*sts, maybe it's because after all these fields are still incredibly niche (with anime/furry porn being even more niche on top) when compared to normies wanting to generate instagram models surfing on the moon 4k artstation very detailed slop.
>>23622 furries hate us, they treat loli like CP and try having a moral grandstanding over us online now of days. There's no way that any of the furries with the funds or knowledge are like our local furfag that enjoys both cunny and furshit who is willing to help both communities
>>23622 Lodestone is a poor fucking seanigger collecting donations to stay alive, and I doubt the other guys have money either, aside from the legit MLfags working in the actual field, and even they don't have enough to comfortably drop anything to push the open source space.
>>23623 >they treat loli like CP and try having a moral grandstanding over us online now of days Is that really the case? I was always way too disgusted to delve into their shit so this is news to me.
>>23625 it's mostly stuff you see in culture war shit against Anime having loli characters and shit, bunch of zoomers and furry profiles trying to act like a 90s christian parent over the subject, the furries themself might be zoomers but that comes into question when the furry admins of discord give a pass to loli/shota furry porn but nuke all anime cunny shit
>>23623 >furries hate us, they treat loli like CP and try having a moral grandstanding over us online now of days I haven't seen it but then again I don't use twitter and we defederate furfags on sight. Still, ironic given the whole cub thing. >>23626 >it's mostly stuff you see in culture war shit against Anime having loli characters and shit, bunch of zoomers and furry profiles trying to act like a 90s christian parent over the subject, the furries themself might be zoomers but that comes into question when the furry admins of discord give a pass to loli/shota furry porn but nuke all anime cunny shit That's because while trannies might be 1, maybe 2 in 10 in anime communities with furries it's more like 8/9 in 10.
>>23623 Fluffyrock (the biggest furry model) is trained on loli and shota (the entire e621 dataset, which includes both). It generates them both fine. I don't think they go around advertising it too loudly though, for obvious reasons.
>>23627 >That's because while trannies might be 1, maybe 2 in 10 in anime communities with furries it's more like 8/9 in 10. I don't know man, I feel like it's 5/10 now of days after all the gender confused zoomers the anime community got from genshin and zoomer anime like demon slayer
>>23629 >Demon Slayer >loved by Zoomers that's a weird way to type Attack on Titan.
Amazing to me that my post from weeks back still lives rent-free in the NAI shill's head. I haven't posted here since then and I'm enjoying hll+easyfluff just fine in /vtai/ while every other thread drowns in this shitty console war
Who asked though?
>>23631 /vtai/ is comfy only because that board's shitters focus on trolling global threads or posting gossipbait threads rather than the /ai/ threads.
>>23629 >Genshin It's not really Genshin's fault per se, blame zoomers for being consolefags. Pretty sure most of them play on soystations. >feels like 5 in 10 It's definitely increasing, not denying it one bit. But it's so much worse with the furries, the whole "HRT ruined nerd communities like drugs ruined niggers in the 80s/90s" is true but the furry parallel would be the black death and mid 1300s Europe.
>>23633 The autism is too concentrated for outsiders, the vast majority only care about posting their oshi
>>23631 Not really a console war, it's more like the Dreamcast vs the PS2 here.
>>23626 >it's mostly stuff you see in culture war shit against Anime having loli characters and shit, bunch of zoomers and furry profiles trying to act like a 90s christian parent over the subject You missed the dragonball zoomers. There are SO many of them using the same exact phrases and images that I'm fucking sure it's either a botnet or some really bizarre psyop.
>>23635 Autism is one thing, but the delusional schizophrenia of the many fanbases is something that is just too much for me.
>>23637 most dragon ball fans are negros, arabs, and spics a lot of them zoomers and for some strange reason a lot of them have gained the weird self righteous autism that white troons have despite a lot of these cultures not really giving a shit about that shit when DBZ was still airing, no one gave a fuck about loli Chichi being a slut and no one gave a fuck about Pan getting her nipple sucked by a baby deer back then.
>>23639 If we're being real nobody gave a fuck about underage Bulma showing off her pussy at the start either. Point is, it's fucking weird. Same phrases, same images, thousands of accounts. I can't even call some of them irony-poisoned because it'd be a fucking compliment.
>>23640 I've noticed on YouTube that I'm getting recommended channels that post the same community post memes and sometimes short video reuploads from shittok. The accounts will appear about a week apart and post the shit that one of the others posted a couple weeks back. You may be right about a botnet.
>>23641 A bot net is possible, AI has made it easy for retarded kids to make code to run that shit and Elon did say that xitter is mostly bots compared to real users so yeah just posting cunny and getting that reaction image of SSJ4 Gogeta many times would not be surprising, the accounts themselves barely give out normal text replies
>>23642 >so yeah just posting cunny and getting that reaction image of SSJ4 Gogeta many times would not be surprising, the accounts themselves barely give out normal text replies It happens even on brand new accounts with ZERO followers, views, interactions or whatever the fuck.
Asking here as well, has anyone found any standalone front-ends for NAI other than that one japanese Android one? I know about the webui extension but gradio makes me want to hammer my balls.
I gotta ask at this point, is there any way to set default settings per model in a1111? I always forget to set the cfg rescale when I switch to the furfag model.
>>23619 >>23621 >>23608
https://naklecha.notion.site/explained-latent-consistency-models-13a9290c0fd3427d8d1a1e0bed97bde2
https://github.com/luosiallen/latent-consistency-model
>The authors managed to train a Stable Diffusion based LCM model in 32 hours on A100 GPU, which is wild. According to lambda labs, it would cost only $35 to train a Stable Diffusion based LCM.
Wut? I don't know what this is all about, but those madlads trying to make a model from scratch or something might be interested in this new tech
>>23646 LCM stuff looks weird, maybe it's because all the examples are 3DPD but they all have this weird blurry shit autofocus look to them.
not sure if this board has an archived search function like 4chin but can someone explain to me what makes the vpred models different or what they can do?
>>23648 Can't really help you but from what I remember it might? make it so that prompt ordering doesn't matter
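Not that anon, but since the question keeps coming up: as far as I understand it (happy to be corrected), v-pred doesn't change anything about prompting, it changes what the unet is trained to predict. Instead of predicting the added noise (epsilon), it predicts a mix of image and noise, which behaves much better at very high noise levels and is what lets the zero-terminal-SNR stuff (true blacks, less need for noise offset) work. Rough sketch of the standard definition, variable names are mine:

import torch

def v_target(x0, eps, alphas_cumprod, t):
    # eps-prediction target is just eps; the v-prediction target mixes signal and noise:
    #   v = sqrt(alpha_bar_t) * eps - sqrt(1 - alpha_bar_t) * x0
    a = alphas_cumprod[t].sqrt().view(-1, 1, 1, 1)
    s = (1.0 - alphas_cumprod[t]).sqrt().view(-1, 1, 1, 1)
    return a * eps - s * x0

It's also why a vpred checkpoint needs the yaml flag and cfg rescale mentioned earlier; mismatching the two (sampling a vpred model as eps or vice versa) is how you get the brown/fried outputs people were posting.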
>>23650 Fucking great. Especially 3rd pic, damn.
>>23652 Very nice
>>23631 same, I'm back after being gone for weeks (in person interview, landed the dream job) and he's still at it I really wonder why he does it. If it was just a few days for trolling's sake then maybe, but it's been going on for so long it's either severe mental illness or paid shilling
>>23646 LCMs kinda suck ass, and they're more like spinoff models than actual training from scratch, which is why the common form for using it is a lora (+ appropriate sampler). It's a bit like the ZeroDiffusion project, taking 1.5 base weights and training them to implement new shit (here zsnr+vpred).
Talking about that guy, he's trying out this paper https://arxiv.org/abs/2311.00938 which proposes an alternative loss objective that basically negates the over-exposure of vpred zsnr models by somehow making CFG a hyperparameter, therefore locking it to a certain value that will have no overexposure. For now they seem to use it for video models since the exposure issues of zsnr are very much visible on those kinds of models. Paper was made by an intern however, so it's going to need some review, but big if true
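If anyone wants to poke at the lora form, the diffusers usage is short enough to paste (from memory, so double-check the repo id; the gist is the LCM sampler + a handful of steps + CFG around 1):

import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# swap in the LCM sampler and load the distilled lora on top of the normal 1.5 weights
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

# LCM wants very few steps and little to no CFG
image = pipe("1girl, solo, upper body", num_inference_steps=4, guidance_scale=1.0).images[0]
image.save("lcm_test.png")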
>ask what is hll6-a5a-epsilon2 on /vtai/
>"Thats the EF model with Vpred removed, so you can merge it into regular NAI models. It's experimental so your results may vary but I think some anons have found success with it."
Interesting...
>>23656 it's actually good, I got something that looks like Based66 with 64 elements while still having the artist knowledge. I'm still trying to work on implementing vpred, but I've been working on this for days already, so if none of my merging recipes get results similar to the non-vpred model merging I'll just wait for a vpred safetensors of the nai hll6 model to try it out again and just dump Based68
>>23657 oh shit looks like a better looking version of the fluff hll6 lyco came out, that's the one vpred model that doesn't give me washed out models when merging so I'll try combining the locon with easyfluff and try again. https://huggingface.co/CluelessC/hll-test/blob/main/lyco/hll6.3-fluff-a5b.safetensors
>>23657 Is it really that hard to implement vpred? Have you already tried to ask lodestone for a lil' help?
>>23658 it's simple to do, it's just that the vpred models that react well to hll6 are way too overpowering in supermerger; that's really the thing holding me back (I tried doing basic two-model merging with the vpred model at a low ratio, but sadly the final model isn't a vpred model; you need to do the whole add/train difference method to make it work)
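For anyone who hasn't touched supermerger: add/train difference is basically A + alpha * (B - C), i.e. you keep model A (the vpred one) intact and only add the delta between B and its base instead of averaging whole models, which is why the v-pred behaviour survives. Hand-rolled sketch of the plain add-difference idea; file names and alpha are placeholders, and supermerger's train difference mode does more than this:

from safetensors.torch import load_file, save_file

a = load_file("EasyFluffV11.2.safetensors")   # vpred model whose behaviour you want to keep
b = load_file("based67.safetensors")          # model whose knowledge you want to add
c = load_file("shared_base_of_b.safetensors") # common ancestor of B, gets subtracted out
alpha = 0.5

merged = {}
for key, w_a in a.items():
    if key in b and key in c:
        merged[key] = w_a + alpha * (b[key] - c[key])
    else:
        merged[key] = w_a

save_file(merged, "based68_test.safetensors")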
>>23658 So where do you put this and how does it work with the model? Curious to try it out, so I wanna figure this out and give easyfluff a go (again) as something new. Been re-training some of my concepts at 1024 just to see if it's better and it's hard to gauge.
>>23663 it only works well with easyfluff 11.2, but you just plop the lyco into your prompt at the end with a weight of 0.8-1 and then you have access to a bunch of artist/vtuber knowledge. https://gofile.io/d/dace6v
you just have to put this tag.csv into the tags folder for the tag autocomplete extension and select it in the extension's settings section in automatic1111
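In case the syntax trips anyone up: depending on whether you're on the old lycoris extension or a newer webui build that loads lycos through the regular lora handler, the bit you drop at the end of the prompt looks something like (name being whatever you saved the file as):
<lyco:hll6.3-fluff-a5a:0.9>
or
<lora:hll6.3-fluff-a5a:0.9>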
(1.50 MB 960x1440 00066-718736616.png)

(1.12 MB 960x1440 00115-698109385.png)

(1.45 MB 960x1440 00152-3688294999.png)

(1.28 MB 960x1440 00133-4155798852.png)

Grab + made lazy meme
(82.31 KB 500x558 89bgea.jpg)

>>23666 lazy meme
>>23667 except the bottom panel is me combining eight artist tags
(1.26 MB 960x1440 00155-1932147809.png)

(1.17 MB 960x1440 00184-512946220.png)

>>23668 Fair enough lol
>>23671 >hataraki-*sluuuuuurrrpppp* *burp*-nai Nice.
>>23674 Now this is the good shit
>>23664 Better phrased I guess, where do you drop the actual file, like in the models folder? And then you have to prompt for it like an embed?
>>23678 the lyco? just into the lora model folder and then you can just put it in the prompt box to activate it.
>>23679 Will it work on anything or just easyfluff.
>>23681 it only looks good on Easyfluff, hll anon's hugging face has a lyco that works well with the NAI model but it's just not as good
(478.21 KB 696x496 00032-2711403491.png)

(446.67 KB 696x496 00044-3486977412.png)

(463.13 KB 696x496 00061-3133562631.png)

(469.87 KB 696x496 00045-3486977413.png)

can someone help me find the nai3 model? i downloaded a model from https://gitgud.io/gayshit/makesomefuckingporn, but i don't know if it's the latest version
>>23683 NAI3 hasn't been leaked
time for a new bread or too early
>>23685 last new bread was 1095 so it wouldn't be too early.
>>23634 this is some real subculture shit. what's this about HRT ruining nerd communities?
>>23687 >what's this about HRT ruining nerd communities? You haven't noticed all the trannies in the gaming/anime/computer space since ~2016?
make bread

