/abdl/ - Adult Baby - Diaper Lover

For Lovers of Diapers and Ageplay!


Computer generated art thread Baby 08/26/2022 (Fri) 02:47:28 No. 16220
Does anyone have a powerful enough PC and enough technical knowledge to run this AI and create some computer-generated ABDL porn?
(268.60 KB 512x512 ai.png)

I tried it out; it sorta worked, but not well for ABDL. It usually does thin, Depends-like pull-ups instead of diapers (or just adds a baby in). One of my best results is attached. I want to try Textual Inversion to get around that problem but haven't gotten it working yet. Craiyon seems much better at doing thick diapers, see http://4usoivrpy52lmc4mgn2h34cmfiltslesthr56yttv2pxudd3dapqciyd.onion/abdl/res/13468.html
>>16254 I wonder if feeding it the whole abdreams.com catalog, along with other diaper content, would make the quality/accuracy better.
(346.03 KB 512x512 ai2.png)

(323.18 KB 512x512 ai.png)

>>16268 This is interesting: Craiyon doesn't know what ABDL means, but Stable Diffusion does. If you prompt Stable Diffusion with "ABDL" you get about half baby pics, but Craiyon just does random stuff. Prompts with "ABDL diaper" instead of "adult diaper" are much closer, though still not perfect. Also, because I was an idiot, here's a better link to the Craiyon thread: >>13468
>>16298 That first image is pretty good, even though it's not photorealistic yet. I imagine the technology will get better and more accurate in the years to come.
(365.56 KB 512x512 ai.png)

>>16304 I'm specifically going for a painted style in the previous images because IMO it looks better. See this one, where there are lots of little things that just look wrong instead of being covered by the art style.
>>16311
>instead of it being covered by the art style.
Yeah, I think that is the case with these algorithms in general. If you are going for photorealism, any detail out of place will really stand out and you will notice it. If you are going for a painting style, there is more room for the random inaccuracies these algorithms produce in abundance to be masked by the art style and not be that noticeable.
>>16311 You know, this is still not great, but it's worlds beyond where AI wank material was just a year ago. That actually kind of passes the "squint test", which is pretty impressive. I'm willing to bet that a year from now we'll have something actually decent.
>>16332 I started following a bot on Twitter 2 or 3 months ago where people tweet requests at it and it posts images generated from them. Back then it looked worse than these, especially human forms and faces. Today it looks damn good most of the time, even with crazy prompts. IMO the best ones are submissions in a certain style or crossovers. Whoever is making these diaper AIs should consider feeding them tons of anime and drawing styles so you can prompt "[insert name] wearing a diaper under her school uniform at school, anime" or stuff like that; I think the results would be surprisingly good. I'll add a recent example.
>>16334 That bot is run by a real person, so I don't think you can use it to generate porn. The submissions are probably reviewed before being uploaded.
>>16338 not suggesting that, just using it to illustrate a point for whoever is making this diaper one
>>16298 what website did you use to make these?
For anyone wanting to test some of those new AIs, more precisely the ones running the Stable Diffusion algorithm: it is still not as good as running it on a dedicated PC with powerful hardware where you can adjust all the settings and tweak things, but:
https://dezgo.com/
https://stability.ai/beta-signup-form
It definitely understands the concept of ABDL. I assume DALL-E 2 would also be able to produce good art, maybe even better than this one – if their algorithm weren't so heavily censored and if there were ABDL content in their database (rather than mostly stock-photo imagery; I'm pretty sure they bought the whole istockphoto.com catalog to use when making it).
The whole prompting thing is really weird. In a way, it's as if you were talking with someone heavily autistic: any slight difference in the order you arrange the words in your sentence can make the difference between the machine understanding what you said – sort of – and totally missing the point. Oftentimes you have to arrange words in a way that might sound unnatural to humans but seems to make sense to the machine. I assume their models still have a way to go with natural language processing and fine-tuning, but that is just a matter of time. This stuff is exponential; I can't even imagine how advanced it will be in 5 years.
>>16341
>I assume DALL-E 2 would also be able to produce good art, maybe even better than this one – if their algorithm weren't so heavily censored
It really grates me that "Open AI" licenses all their stuff, puts it behind a paywall, and puts huge artificial limits on what it can do. Even Google has the Chromium project.
(317.25 KB 512x512 ai2.png)

(347.75 KB 512x512 ai.png)

You need to get lucky and have the right prompt, but Stable Diffusion can definitely make some good-looking artwork. Not professional level, but close. >>16340 Making them on my own machine using the low-VRAM Optimized Stable Diffusion. Takes ~4 minutes to generate a set of 16 images on an RTX 3070.
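[Editor's note: the poster used the separate "Optimized Stable Diffusion" fork, but comparable low-VRAM local generation is available through Hugging Face's diffusers library via half-precision weights and attention slicing. A minimal sketch under those assumptions — the checkpoint name, prompt, and batch sizes are illustrative, not the poster's exact setup:]

```python
import torch
from diffusers import StableDiffusionPipeline

# Load SD 1.x weights in fp16 to roughly halve VRAM usage.
# (Checkpoint choice is an assumption; any SD 1.x model works the same way.)
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# Attention slicing computes attention in chunks: slower, but keeps
# peak VRAM low enough for 8 GB cards like the RTX 3070 mentioned above.
pipe.enable_attention_slicing()

# Build up a set of 16 images in small sub-batches.
images = []
for _ in range(4):
    out = pipe(
        "ABDL diaper, painted style, digital artwork",  # illustrative prompt
        num_inference_steps=50,
        guidance_scale=7.5,
        num_images_per_prompt=4,
    )
    images.extend(out.images)
```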
>>16379
>Takes ~4 minutes to generate a set of 16 images
Interesting. Can you fine-tune the experience? For instance, asking the program to generate a large sample at a smaller resolution, then choosing 2 or 3 that look promising and asking the algorithm to work on those and increase their resolution? That seems like a good way to get rid of dead-end prompts.
Some of it is uncanny and freaky.
(329.45 KB 512x512 elf2.png)

(303.46 KB 512x512 elf1.png)

>>16385 Some of that is possible but I don't have it set up right now. Often a single prompt will produce a mix of good and bad results based on the random noise that it starts with. That's why I do a batch of 16 per prompt.
>>16408 What were the prompts/settings to these two?
(319.19 KB 512x512 ai2.png)

(353.93 KB 512x512 ai.png)

>>16449 Both used the same prompt, seed 6, images 1 and 8. DDIM, 50 steps, 7.5 guidance scale.
> ABDL diaper on high fantasy loli teen ABDL elf princess wearing disposable diaper, Trending ArtStation Pixiv high quality digital artwork by artgerm, wlop, sakimichan, Greg Rutkowsk
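[Editor's note: settings like these map directly onto scriptable parameters. A sketch of the same seed/sampler/guidance combination in diffusers, assuming a pipeline `pipe` set up as in the earlier sketch — the poster's own scripts may differ:]

```python
import torch
from diffusers import DDIMScheduler

# Swap in the DDIM sampler the poster used.
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

# A fixed seed makes the run reproducible; "seed 6" as reported above.
generator = torch.Generator(device="cuda").manual_seed(6)

image = pipe(
    prompt,                  # the quoted prompt string above
    num_inference_steps=50,  # DDIM, 50 steps
    guidance_scale=7.5,      # 7.5 guidance scale
    generator=generator,
).images[0]
```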
(366.01 KB 512x512 ad4.png)

(313.96 KB 512x512 ad6.png)

(372.36 KB 512x512 ad7.png)

(374.06 KB 512x512 ad3.png)

(431.31 KB 512x512 ad5.png)

Tried to make some fake ads; some of them turned out surprisingly well. Here are some of the better ones, plus three more painted/stylized images. I still haven't found any way to reliably get the style of diapers I want, though "bulky disposable diaper" along with "ABDL" seems to be getting closer; still, only around 10% of images match. Craiyon, by contrast, gives that type of diaper like 95% of the time, with much worse everything else.
(346.32 KB 512x512 ai1.png)

(328.67 KB 512x512 ai2.png)

(291.12 KB 512x512 ai3.png)

(390.60 KB 512x512 ad1.png)

(396.46 KB 512x512 ad2.png)

>>16630 Oh, max 5 files per post. Here's the other 2 ads and the 3 paint images.
(1.71 MB 2685x3013 Untitled-1.jpg)

Right. It is important to figure out your prompts and how the AI thinks about them. It can find disposable diapers of the pull-up kind, but it will want to change them into pants.
For those trying to understand how the whole prompt thing works, I recommend checking this site, which has a bunch of AI-created works along with the prompts used for them. https://lexica.art
Look, can we all just agree that, as amazing as this technology is, AI-generated art is not yet producing anything worth fapping to?
>>16865 Or is AI showing us that to it, we all look like deformed mentally ill amputees?
(331.35 KB 512x512 ai.png)

(370.21 KB 512x512 ai2.png)

(303.40 KB 512x512 ai3.png)

(299.61 KB 512x512 ai4.png)

>>16865 It's getting there. Surely someone has low enough standards for the top 1% of generated images. I've found some success with doing prompt blending. Take the learned conditioning of a prompt involving diapers, multiply by 1.2, subtract the conditioning for "naked baby" times 0.2. Stuff like that. Textual Inversion should be better than this, but I still haven't gotten it working (I need to either get better hardware or rent some online)
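[Editor's note: the conditioning arithmetic described above can be reproduced by encoding each prompt with the CLIP text encoder and combining the embeddings before they reach the U-Net. A minimal sketch using diffusers (newer versions accept precomputed embeddings via `prompt_embeds`); the 1.2 and 0.2 weights are the poster's, everything else — including the prompts and the pipeline `pipe` from the earlier sketches — is illustrative:]

```python
import torch

@torch.no_grad()
def get_conditioning(pipe, prompt):
    # Tokenize and run the prompt through CLIP to get its learned conditioning.
    tokens = pipe.tokenizer(
        prompt,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    )
    return pipe.text_encoder(tokens.input_ids.to(pipe.device))[0]

# Blend: amplify the wanted concept, subtract the unwanted one.
cond = (
    1.2 * get_conditioning(pipe, "girl wearing bulky disposable diaper, painted style")
    - 0.2 * get_conditioning(pipe, "naked baby")
)

# Feed the blended embedding in place of a plain text prompt.
image = pipe(prompt_embeds=cond).images[0]
```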
>>16891 Indeed. There are a lot of people with a very low bar for jerkability; I have no doubt there has been a lot of jerking off going on already. But I agree that this technology has to be refined on many levels:
>better ways of choosing art styles/better user interface
For instance, it would be nice if there were some sort of menu showing a template image in different art styles, and you would choose one for the machine to follow when creating yours.
>a way to fix the mistakes the machine makes
Sometimes the drawing is like 80% right, but there's a third arm somewhere. It would be nice (since the algorithm can't do everything yet) if they offered a way for the user to fix this easily. Maybe you would select the wrong part of the image and say "remove" or something.
It's fascinating, because this is only the first generation and it is already good. This will only get better and better, because whenever a technology comes up – even if it is not that good yet – it creates a feedback loop: people start using it, there is demand for it to get better, people put more money into it, they start pushing the limits of the technology, and it gets even better.
(195.43 KB 1500x1200 03.jpg)

(368.89 KB 512x512 1 (4).png)

(382.77 KB 512x512 1 (16).png)

(346.84 KB 512x512 1 (7).png)

(366.15 KB 512x512 1 (9).png)

It's interesting seeing the variations the algorithm comes up with:
Fun fact: I was able to find the database they used to train Stable Diffusion. It is a public dataset called "LAION-5B" with around 5 billion images – anyway, a lot of images. I was wondering if there were ABDL images in it, and if so, which ones, and I was able to find out: https://haveibeentrained.com/?search_text=ABDL There are very few, most of them ABDL adult diaper ads rather than more artistic ABDL content. That might be one of the reasons the algorithm struggles with creating ABDL content: there are very few references. Here are some of the images they used.
>>16993 The problem is not that they aren't in there; they are. Search for any diaper – ABDL or medical, or even kids'; you'll find it. The problem is what terms you need to use in the prompt to invoke them. I have tried my absolute best and I cannot stop it from conjuring up cloth nappies. However, I can get it to sort of call up some diapers with a landing zone, like ABUs. HOWEVER! It turns these into some sort of belt every time. The issue is that it is really difficult to conjure these things up without the model mangling them. It took me 2 days to manage to get these, and it was mainly just adding more and more things to the negative prompts and forcing prompt edits mid-steps. >>16993
This might be interesting: I found an algorithm based on Stable Diffusion that does the reverse – it turns an image into a descriptive prompt. Maybe that can help us understand the best tags to use when we want the algorithm to create an image similar to a given one. https://replicate.com/methexis-inc/img2prompt
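[Editor's note: the same idea is available locally through the open-source clip-interrogator package, for anyone who would rather not go through the Replicate site. A sketch, with the package usage taken from its public README — the input file name is a placeholder:]

```python
from PIL import Image
from clip_interrogator import Config, Interrogator

# ViT-L/14 is the CLIP variant SD 1.x was trained against, so its
# interrogations tend to transfer best back into SD prompts.
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))

# Produces a caption plus style tags you can reuse or edit by hand.
prompt = ci.interrogate(Image.open("reference.png").convert("RGB"))
print(prompt)
```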
>>17078 CLIP interrogators don't really help either. Been there, done that. Why? Because none of the interrogators understand what a "diaper" is unless it is a kid's diaper in a product photo or a stock photo of a child. Otherwise it is "panties", "shorts", "underwear". Although, from testing SD, I can tell you that if you negative-prompt "shorts" or "jeans", your chances of getting anything remotely close to a diaper go to almost none.
(313.07 KB 512x512 ai.png)

(296.14 KB 512x512 ai2.png)

(325.66 KB 512x512 ai3.png)

(361.43 KB 512x512 ai4.png)

(322.94 KB 512x512 ai5.png)

Another solution: multi-prompt conditioning. Take multiple prompts with different guidance scales and sum their results every step. Most of these were done with a "naked baby" negative prompt. Normally two prompts are used (the prompt you input and the unconditioned empty-string prompt). Adding more slows it down, but I've been able to get some good results (and a lot of bad ones).
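[Editor's note: mechanically, this is a generalization of classifier-free guidance — instead of one (conditional − unconditional) correction, several are summed, each with its own scale, and negative scales act like a negative prompt. Each extra prompt costs an extra U-Net evaluation per step, which is why it slows down. A sketch of the inside of a custom denoising loop; variable names are illustrative and the poster's implementation may differ:]

```python
import torch

def guided_noise(pipe, latents, t, embeds, scales):
    # embeds: list of text conditionings [uncond, cond_1, cond_2, ...]
    # scales: one guidance scale per conditional prompt; negative values
    #         push away from a concept ("naked baby" above).
    latent_in = torch.cat([latents] * len(embeds))
    noise = pipe.unet(
        latent_in, t, encoder_hidden_states=torch.cat(embeds)
    ).sample
    uncond, *conds = noise.chunk(len(embeds))

    # Sum every (cond - uncond) correction, each with its own scale.
    pred = uncond
    for scale, cond in zip(scales, conds):
        pred = pred + scale * (cond - uncond)
    return pred  # feed into pipe.scheduler.step(pred, t, latents)
```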
AI porn will never be good
Anyone trained any textual inversion embeddings yet and willing to share? I.e., you can teach the model a better concept of a diaper. Or if someone could just point me to a set of ABDL diaper transparent templates, I could probably do the rest. The end goal would be for the model to output better-detailed diapers. It's a pain in the ass to wait two and a half hours just to find out you fucked it up.
(706.42 KB 1024x1024 00185-727535901.png)

So, after about one afternoon and evening of messing about with img2img, masking things, and fiddling around with prompts and settings, I have managed to produce this. Basically everything but the tape zone and tapes was run through the AI. The lad was generated with txt2img, though changed drastically during the workflow. Now... this seriously took many attempts and a lot of trips to Photoshop and back just to correct minute issues in the output and adjust the mask. I have to say... it was a lot of fun. More than painting by hand, since now I can work with more realistic subjects. I think it'll be a long time before we can straight up generate stuff outright. Also, Stable Diffusion seems to respond best if you combine "Diaper" with "Panties" in the prompt. Figured this out by interrogating loads of pictures, along with the ones made on the road that led to this.
(602.80 KB 1650x462 teaser_static.jpg)

An interesting development in this technology: apparently Google is testing a new algorithm called "DreamBooth" that allows the computer to reproduce a given scene/character consistently across different scenes. This would be pretty useful, since right now these algorithms never generate "exactly" the same art style in sequence, which makes it pretty hard for a user to create, for instance, a story or comic, where continuity between scenes is necessary. https://youtube.com/watch?v=NnoTWZ9qgYg
So I tried out DreamBooth, making a ckpt trained on a set of fronts of a variety of ABDL diapers. Not bad. Probably the next step would be to train on a set with one specific diaper.
>>17478 This is a very important development; you could also train such an AI on Japanese omutsu and plastic cloth nappies.
>>17478 Please, please, please (!!!) share your ckpt!
Sure, here – two versions. One is a diffusers version that can only be run on a Google Colab ("diaperdream"), and the other is model.ckpt, my first attempt at making a .ckpt. It didn't come out as I'd have liked; it's not the same as the diffusers version. Prepare to be disappointed. I'm also including my samples and training parameters for anyone who wants to attempt their own .ckpt version of the diffusers version. Anyway, I dumped it here for now; I probably should have gone with a MEGA. Will get to that later. tl;dr: I'm going to remake it later. https://anonfiles.com/aax3SeA3y6/aifag_better_diaper_SD_zip -- AiFag
>>17478 These images came from the diffusers version, just to make things clear.
UPDATE: good news, the converter for the Google Colab version came out, so here's the model for these images as a .ckpt. Also, sorry for spamming up this thread. I would appreciate it if someone could merge this with some of the other models out there and report back the results. [mega] https://mega.nz/folder/pacmSRzK#GycG1GY4KhO589tuiJwDKA
NovelAI's new anime image generator is good, but trying to get it to generate ABDL is like shoving my hand through a blender. Here's a stab at AR that finally turned out somewhat decent.
>>17645 You made a babby. 😀
(516.31 KB 512x768 FbuaJWbVUAAJSDB.png)

>>17645 That one actually looks decent.
>>17478 It looks good, especially the second photo. You can clearly see the diaper. The only issue – if I may – is that it still looks like she is wearing panties under the diaper, especially in number 2, where you can see the blue spots of her panties. Number 3 at least stayed sort of consistent, the same color, but you can see the point where the diaper sort of morphs into a regular pair of panties tighter against her body. Again, thank you so much for your work.
>>17645 Think I'm finally starting to get the hang of it. Turns out the curly braces their model uses to strengthen the input help out a lot. Getting some much nicer looking results now.
>>17645 Fuck, that is amazing. Is that open source, or do you need to pay for it?
>>17804 Unfortunately, it's paid (I was already subbed to them for their AI text gen when they rolled it out). Also struggles with more complicated scenes and I haven't been able to get it to handle wetting or messing, but at the end of the day, if all you're after is some anime girls in diapers, it's pretty damn solid.
>>17810 Apparently their NovelAI models leaked online a couple of days ago. magnet:?xt=urn:btih:5bde442da86265b670a3e5ea3163afad2c6f8ecc&dn=novelaileak Don't take my word for it and install at your own risk, but people were talking about it and it seems legit.
>>17825 impressive
>>17825 Based. I'll probably continue subbing anyway since I do like the devs (and my hardware is shit) but here's hoping some braver anons than I get some good stuff out of it.
Does anyone know if someone uploaded the leaked NovelAI stuff on a sharehoster or somewhere without torrent?
I just started playing with Stable Diffusion and I like it for art and photo stuff, but it's severely lacking in diapers. I see the weebs solved their problems by training a model on anime with Waifu Diffusion; there's even a model specific to furries. Like, WTF, we can't be beaten by furries. I've only got an RTX 3090 and I have no clue how to train a model, but I'm willing to learn, and I encourage the rest of the community to start getting image data together for it. ABDiffusionL can be a reality if we work together. Similarly, I see people trying to use AI Dungeon and the like for diaper stories when logically it would be better to train a model for local usage with KoboldAI. Anyway, if you can point me in the right direction, I think the diffusion model is a good starting point.
>>17917
>I've only got an RTX 3090
Dude, that's not an excuse. I have an RTX 3060 and got DreamBooth working with the help of a YouTube tutorial. Just Google DreamBooth and train your own diaper model!
I'm working on a DreamBooth model based on favorite Pixiv artists. Hope it'll go well.
>>19266 He does give little tech updates with new posts, which is nice. E.g., for pic 1:
"Further work on making a custom model. Same workflow as always: generate the base image with txt2img, adjust in Photoshop, iterate with img2img and Photoshop. Two issues I keep encountering: adjusting the output to generate and keep a shirt on seems to be 'either it has it, or it doesn't', and I'm still struggling to reliably get faces that are not 'square-jawed male underwear model' or 'adult toddler', like in photobash #1."
"Custom-trained SD 1.5 model on 40 hand-cropped, upscaled and adjusted images found through search engines. Automatic1111 repo. All relevant references and citations for the technology can be found on the GitHub pages of the relevant projects. I highly recommend everyone go read the original publications and cite them in further development."
>>19313 Damn, those are good. It is amazing how fast this technology is evolving, thanks for the link.
>>19436 Now if only it could do messy butts
>>19720 >>19721
>AI still can't do the back of a diaper
lmao
>>17917 I had absolutely no idea what I was doing or getting into like 4 days ago. I started with this: https://www.youtube.com/watch?v=1mEggRgRgfg And now I've trained a hypernetwork for novelAI that can do some okay diapers (if you follow the video, what you're doing is training a hypernetwork. I learned that literally as I did it). Despite having no idea what I was doing, it still took way longer to prepare all the images and tag them than everything else, and it took me like a dozen or two hours, not hundreds. And btw, I have a 3070. So a 3090 is no excuse at all. In all, I trained it on about 218 images, and it still struggles with a lot of things. Often the diapers look more like panties, and only rarely are they truly tapey-diapers, most of the time they're pull-ups even when that's not specified, and even though I had a tag to specify that in the training set. I may go back and re-train without pull-up images having the "diaper" tag...
>>19862 Same person again – I was gonna ask for advice on samplers and stuff for these anime-type images. I noticed that high CFGs tend to make better images but also make them look a lot more burned/overexposed and oversaturated. I also noticed that when using the DPM Fast sampler, if I set it to 150 steps and then interrupt it at 84 steps, I get very nice and clean images, whereas if I just set it to 84 – or anything, really – they look a bit more burned out and less smooth, clean, and anime-like. What's up with that? I like things looking less JPEG-y, cleaner, and smoother. I haven't turned on "enable quantization in K samplers" yet; there's still an old seed I want to poke around with a bit more, but once I do, I look forward to seeing if that'll sharpen things. Also, some guide(s) said to set "stop at last layers of CLIP model" to 1 when training. Why is that? Also, I trained on DDIM and default to it in general. Still trying to decide what the best sampler is. Any tips there, for anime-style images? Also, have bucket-loli. She's easy to carry around. Treat her well.
>>19863 Nice, anon! Despite the quirks, it definitely looks like an enormous improvement over what vanilla NovelAI produces. My (unfortunately very limited) understanding is that there's a definite sweet spot for CFG. A low value means the image tends to follow the prompt very little and have more deviation (not always in a positive way), whereas a high value tends to follow pieces of a prompt a bit too closely. Specifying a purple shirt, for instance, might cause purple to bleed into other parts of the image as well. Likewise, results tend to be a lot more similar. As for samplers, those get into pretty gnarly territory, since my understanding is that they're basically methods of handling the noise you use for generation: stable diffusion starts from noise, then works backward to an actual image. I'd expect that for some samplers the number of steps factors into this – if you specify a high number of steps, you begin from a much "noisier" seed, since you have more steps to go from noise to image, so you often end up with smoother results. For a smaller number of steps, the initial seed may be a little less noisy, which converges faster but may give blurrier-looking images. The comments in this Twitter post are a good read: https://twitter.com/iScienceLuvr/status/1564847717066559488?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1564847717066559488%7Ctwgr%5E984eb22e760b6672815e026a4f84e75655268642%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwandb.ai%2Fagatamlyn%2Fbasic-intro%2Freports%2FStable-Diffusion-and-the-Samplers-Mystery--VmlldzoyNTc4MDky I believe NovelAI used the so-called "CLIP skip" when they trained their model whereas normal SD didn't, itself inspired by another paper. They discuss it in more detail here: https://blog.novelai.net/novelai-improvements-on-stable-diffusion-e10d38db82ac The gist seems to be that taking the premature outputs tends to give better results than the final outputs themselves, since the final layers usually shove image information into a much smaller set of outputs, which can lead to loss of information (think image compression). By skipping that preparation step, the model just has more to work with. As for samplers, k_lms, Euler, and Euler-Ancestral seem to be the ones recommended most often on the NAI Discord, so any of those would likely be a safe bet.
>>19868 Ah, I've actually already seen that thread on samplers, but I hadn't seen that thing about "CLIP skip" before. That makes it sound like I actually want "stop at last layers of CLIP model" set to 2 or even 3 when training, in order to get the hypernetwork to better understand the difference between, say, "pull-up diaper" and "diaper", since those are separate tags. I definitely got that there's a sweet spot for CFG. But the two things I'd like to know more about are: 1. why high CFGs tend to overexpose/"burn out" the image, and whether there's anything I can do about that other than turning the CFG down. A lot of seeds – most, in fact – seem to do best with a CFG above 14, but that leaves them kind of burned out. See the attached images as an example: the image resolves fantastically better and is way more coherent with fewer random broken bits at the higher CFG, but the lower-CFG one has way better style and colors, and the higher-CFG one looks burned out. 2. how to consistently recreate that smoother look I get when interrupting DPM Fast at 84 steps with a target of 120. I imagine if I try other numbers I might find other combinations that work, but that one looked particularly nice. I mean, I guess I can do it consistently, but it's annoying to have to time something every time; it's a lot more work to generate images when I have to babysit them like that. Surely somewhere there's a more consistent way to get the images to have less intense colors? Something in the prompt?
>>19879
>What I'm currently using for "normalization" at high CFG scales is this code, inspired by the "dynamic thresholding" method but without any clamping.
That's a nice bit of code I'd like to try that he links there; I just have no idea where to put it.
>>19942 I'd really like to know how they got that diaper model; it does the diapers incredibly well. FWIW, my model actually stabilized and I wasn't able to continue training any more at around step 32,500. Used 218 images to train on, all tagged by hand with my own system for diapers. Having "front diaper", "back diaper", "side diaper", "top diaper" and "bottom diaper" all seemed to help a bit. Still need to get around to training a new one where, instead of having both "diaper" and "pull-up diaper", images with pull-ups only have "pull-up diaper". Maybe that'll help the diapers not be pull-ups as often. But the real issue is how panty-like they often end up being. Putting "panties" in the negative prompt helped a little but also made the images a bit less coherent. I wonder if all the various diaper-related tags aren't confusing it, though: "diaper", "pull-up diaper", ["front diaper", "side diaper", "back diaper", "bottom diaper", "top diaper"], "clean diaper", "damp diaper", "wet diaper", "leakguard stripes", "wetness indicator", "pampers", "colored wings", "big diaper". Next time around I think I'll drop "clean diaper" and just be sure to use "damp diaper" or "wet diaper" when they apply. Not sure how I feel about the viewpoint tags; they seem to have helped quite a bit, though. Elsewhere I saw people complaining that AIs can't do diapers from behind, but mine... kinda can. Also, I probably need to get the cropped images out of the training pool. Maybe I'll go through the loli thread for the uncropped versions of stuff by 初夢 and 午後 from Pixiv. I have no idea what those names are in not-kanji, lol. Why do you think this other model does diapers so much better, though? Is the artist using inpaint and doing another pass to make the diapers even more diaper-y? Or maybe I need to be able to train my model more without it stabilizing?
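[Editor's note on "where to put it": normalization tricks like the one referenced live at the point where the guided noise prediction is assembled, right before the scheduler step. The linked code isn't reproduced in the thread; the sketch below is one common clamping-free variant of the idea, which rescales the guided prediction's statistics back toward the conditional branch to fight the burned-out look at high CFG. The function name and the mix value are illustrative:]

```python
import torch

def rescale_guidance(noise_cfg, noise_cond, mix=0.7):
    # High CFG inflates the variance of the guided prediction, which
    # shows up as overexposed / oversaturated images. Rescale the guided
    # prediction's per-image std back toward the conditional branch's,
    # then blend the two so the correction isn't too aggressive.
    std_cond = noise_cond.std(dim=(1, 2, 3), keepdim=True)
    std_cfg = noise_cfg.std(dim=(1, 2, 3), keepdim=True)
    rescaled = noise_cfg * (std_cond / std_cfg)
    return mix * rescaled + (1.0 - mix) * noise_cfg
```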
>>19953 They did mention that people would need a manual to use it, whatever that means. It could be that it's more of a pipeline, with inpainting and some additional edits involved. But I'll stop putting words in their mouth. As for the tags and potential improvements: if in doubt, simpler is better. The fewer tags you have to work with, the more likely the model will recognize what individual tags mean. A nice perk of training hypernets over a full model tune is that the original model stays mostly intact. It might be worth experimenting to see whether any combination of the above tags and vanilla tags makes others redundant. Another question is how well represented each tag is in your training set; if any tags are barely present, the model is unlikely to perform well with those tags. If the software or scripts you're using for training can assign loss weights to samples during training, you can mess around with those. There's also data augmentation, if you're not doing that already: given your 218 images, do something like flipping them all horizontally, and suddenly you have 436 images to work with. It's not quite the same as hand-curating 218 new images, obviously, but it can still add enough variety to be useful, and in general more data to work with is always better. I believe in you, anon!
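[Editor's note: the flip augmentation described above is easy to script outside the trainer. A minimal sketch, assuming the common layout of one image plus a matching `.txt` tag sidecar file per image; tags are copied unchanged, so any direction-specific tags would need manual fixing after a flip:]

```python
from pathlib import Path
from PIL import Image

dataset = Path("training_images")  # hypothetical folder layout

for img_path in dataset.glob("*.png"):
    # Mirror the image left-to-right: 218 originals become 436 samples.
    flipped = Image.open(img_path).transpose(Image.FLIP_LEFT_RIGHT)
    flipped.save(img_path.with_name(img_path.stem + "_flip.png"))

    # Duplicate the sidecar tag file so the flipped copy keeps its tags.
    tag_path = img_path.with_suffix(".txt")
    if tag_path.exists():
        img_path.with_name(img_path.stem + "_flip.txt").write_text(
            tag_path.read_text()
        )
```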
>>19978 The effort to create a functional ABDL model is greatly appreciated. I just have one question: could this model let me merge the Merunyaa and PieceofSoap drawing styles?
>>19978 Thank you, anon. I know you don't want to divulge too much, but your advice is really appreciated. To add to that, my previous post here >>19953 wasn't entirely accurate. I stated that hypernets leave the original model intact, but it would have been more accurate to say that they don't train on the original model. Rather, hypernets modify the weights of the original model at runtime, so you can still get behavior the original model didn't exhibit; it's just significantly cheaper. Even then, hypernets will be secondary to a full model tune, and if in doubt and you have the necessary compute, it absolutely doesn't hurt to experiment with that too. This thread: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/2284 has some good advice for hypernet tuning and good hyperparameters to try, though I'm less familiar with recommended hyperparameters for a full NAI model tune. Also, if you'd like to experiment further with parameters, I'd fully recommend Weights and Biases: https://wandb.ai/site . It has a really nice suite for hyperparameter optimization and is great for result visualization. You do have to be a bit comfortable with Python to get started, but if you are, it's an amazing tool that can save a significant amount of time and organization. If folks decide to crowdsource images and tags for an ABDL model, I'd be 100% behind that and down to help.
>>19988 Whoops - meant to reference >>19967 .
(119.70 KB 900x654 nozumi.jpeg)

>>19992 The Merunyaa one is incredible. I think Nozumi would be a great style to train on: https://www.pixiv.net/en/users/32169882/illustrations
>>19967
>inpainting and edits
One thing I've noticed is that the AI does way better when given even a simple sketch to build off of. I noticed that when doing some vanilla fantasy stuff when I first got SD, and haven't really thought of it again since. It might be interesting to draw something with my crappy skills and then use img2img, or even to look up really lackluster diaper pics and see how the AI handles them in img2img. Most of the tags are pretty common. I made sure every image with a diaper had 1-3 tags for the diaper's angle ("front diaper", "side diaper", "bottom diaper", etc.) and either "clean diaper", "damp diaper" or "wet diaper". Yeah, "auto-flip images" was an option I used. There were 872 files in the reference folder: half were text files with the tags for each image, and half of the remaining 436 images were just flipped versions of the original 218 (with the exact same tags as the non-flipped versions, of course).
>I believe in you anon!
Thanks! Truth be told, I think I'm beginning to burn out a little on this, though. Or at least, I'm recognizing that staying up all night tagging images just isn't tenable long-term, lol. But when you don't stay up all night doing something, it's a lot harder to find time for it. Fortunately I've only got 67 images left to add and tag. I'll need to go through the first 218 and remove the "diaper" tag from all the pull-up images, though. All in all, it shouldn't take the 6+ hours it took to tag the first 218; that took so long because I was very careful with the tags and had to do a lot of manual cropping. One issue I've noticed is that one of the main ways, if not the main way, that generations end up as failures is the model generating weird stuff at the stomach/chest. I wonder if "flat chest" will help with this, or if there's anything I can do in training to help it (the first three attached images are examples of this).
>>19978 Many thanks for all the advice!
>that I spent the last 29 days on 10hrs/d
>background knowledge on AI stuff
That... explains a lot. Your model is fantastic and shows that a lot more is possible with current tech than I would otherwise have thought.
>I'd be very surprised
It's a bit rare, but be very surprised, because this does actually produce decent-ish results sometimes. See my last two attached images. Not perfect by any means, but if I spent more time in inpaint I could probably fix them up nicely. Your advice does tell me a lot, though. Maybe I'll try some other keyword for diapers in training. This might be why my diapers look so panty-like: the base model is getting confused by the original tag, which was applied to many images where the diapers aren't really visible or drawn as we think of them... The other thing it tells me is that I have a lot to learn about AI, lol. That's part of why I'm doing this, actually. I'll be studying physics again soon, and in general I'd like to learn more about AI, and I learn best by doing/practicing. Also, I don't know if this will horrify you or what, but this hypernetwork actually stabilized in training – it hit step 32,500 and said training could not continue.
>I hope that helped.
It does, a lot, thanks!
>>19988 I might give hypernetworks another try or two. I'm encouraged by how close a lot of these results are getting, and it's something I'm a bit more familiar with. But after that I'll look into model fine-tuning.
I'm not familiar with Python, except for a single lab class at uni. But I'm very familiar with C# in a Unity environment, and with C++, so knowing some languages should make it a lot easier.
>>19988 Have you considered ripping tags from a site using Hydrus? That would probably really speed the process up. I haven't used the more advanced options before, but I'm sure you could figure out a way to even share tags/galleries here for a decentralized method of tagging a bunch of images if you asked around the Hydrus general over at >>>/t/9763
I used the Different Dimension Me AI and edited the result to fit the nappy.
>>20749 Thanks for sharing this AI art. It's fascinating how far it's come so far.
(434.21 KB 512x768 e(1).png)

>>20749 Nice, it looks much better than my original.
https://www.deviantart.com/thetripledl/ Can someone figure out how they do this stuff? All they say is "custom DB model", but if I train something with DreamBooth I get stuff that doesn't work. They apparently even use the new model now, which is supposed to be bad for this stuff because the developers censored all the lewd content and styles.
Slightly related: ChatGPT has been kind of fun. Here is a sample daily guide for a week-long ABDL role play scenario based on different stages of potty training:
Day 1: As a baby, you would not have any potty training and would wear diapers at all times. You may have multiple accidents throughout the day, as is normal for a baby.
Day 2: On this day, you would simulate the start of potty training. You might wear training pants or pull-ups during the day, but switch to diapers at night. You may still have some accidents during the day, but you would also be encouraged to try using the toilet or potty chair whenever you feel the urge to go.
Day 3: On this day, you would be close to the end of potty training and would wear training underwear during the day. You may still need to wear pull-ups or training pants at night to help prevent accidents. You should aim to have fewer accidents during the day as you continue to practice using the toilet or potty chair.
Day 4: On this day, you would be considered fully potty trained and would wear regular underwear during the day. You may still need to wear pull-ups or training pants at night for a while, until you are able to stay dry through the night.
Day 5-7: You would continue to wear regular underwear during the day and may or may not need to wear pull-ups or training pants at night, depending on your level of continence. You should aim to have fewer and fewer accidents as you continue to practice good bathroom habits.
It's important to note that this is just a sample guide and that everyone's potty training experience is different. The key is to find a schedule that works for you and to make sure that you are comfortable and having fun during the role play.
Some stuff I've generated with Hentai Diffusion. It's not perfect for diapers but generally good for other sorts of hentai
>>20848 Don't expect the AI to understand the design of the hands; besides, I hadn't noticed that detail in order to edit it.
>>16220 Have you tried Stable Diffusion 2.1? Maybe that version will understand much better how to draw a nappy once it is trained?
>>17254 That's incredible. Do you have any more pictures with Pampers?
>>20931 That last photo is just adorable. Thanks for posting. It's like an alternate reality where women aren't potty trained until they're 20 years old or so.
>>19978 Pretty knowledgeable post.
Is anyone here experimenting with generating photorealistic AI content? Like, I don't know, training an algorithm on the thousands of photos in the abdreams.com catalog or something. Some cool models have been popping up online, like the Analog Diffusion model. It would be cool to have something like that, but with girls wearing diapers.
>>19978 NAI does diapers just fine, but (un)fortunately only with the furry model. Which is great for furs like me. I have a feeling you can still invoke some diapered humans with the furry model, though; I'll give it a shot in a bit.
>>21050 Just did a quick test: 8 out of 10 results were humans (albeit some had animal-ish noses).
>>20844 I'm impressed with how far this tech has come, especially compared with the first few posts in this thread. A shame I couldn't get the diaper to look more poofy, even using the tag ((puffy diaper)).
Has anyone tried this AI? It's somewhat rudimentary – it modifies the original image toward something you want to add. I don't know if it can be of help. https://colab.research.google.com/drive/15MQTRMqc7yn626PiC3KPb-DvVx_FrBAr
>>21067 Yeah, you're not going to get shit with Looking Glass. It generates low quality variations of images and that's about it.
>>21051 Is this with the "animefinal-full-pruned" model from the NAI leak and the "furry" module loaded as a hypernetwork (There's also furry_2, furry_3 and some others)? Or is this a newer version of NAI? What's the negative prompt? CFG? CLIPS? Sampler? Number of steps? Default eta noise seed delta of 31337? I've been trying to recreate this for hours and haven't even gotten close.
>>21233 This is actual NAI(not the leak), Furry Beta 1.3. Had my usual negatives for when I'm doing furry porn (extra hands, ugly, watermarks, etc). 50 steps, 11 scale, k_euler. You won't get anywhere close with the leak unfortunately, they've made leaps and bounds since their last update.
>>21238 I thought their current stuff was like, "pay for access to our server where you can send in requests to our machines" so basically your prompts and outputs would be monitored so you couldn't do stuff like this. Does it actually allow you to get a model on your machine?
Textual inversion is powerful. I've uploaded two versions of an ABDL textual inversion that I trained to: https://mega.nz/file/AIoBCDra#N2GRsWu3WBYBi0nCbVPhO8zp4HjRHw8rb41xbf6NpoE Both were trained on waifu diffusion 1.3 but I found the embeddings work surprisingly well with the novelai leak or the redshift style model. I highly recommend putting something like the bad-artist and bad-prompt (v2) embeddings in the negative prompt space (possibly with 0.6 multiplier) to compensate for the abdl embeddings causing bad artifacts. abdl-girl-wd16 is the best, often you'll want to put it at both the start and end of your prompt. -wd16b was trained with much higher learning rate and causes much worse artifacts because of it, but putting it at the end of a long description seems to help make sure a diaper is on if -wd16 isn't enough. The attached images used these embeddings, though they were the best of the ones I generated. Next up is to try using IRL ABDL images to train an embedding, my first test showed that I didn't have enough images to train it effectively.
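[Editor's note: in the A1111 web UI, embeddings like these load by dropping the files into the embeddings folder, as described later in the thread. For anyone scripting instead, newer versions of diffusers can load textual inversion files directly; a sketch under that assumption — the second file name and both placeholder tokens are illustrative, and weighted negatives like "(bad-prompt:0.6)" are A1111 prompt syntax that plain diffusers does not parse:]

```python
# Bind each trained embedding file to a placeholder token.
pipe.load_textual_inversion("abdl-girl-wd16.pt", token="<abdl-girl>")
pipe.load_textual_inversion("bad-prompt-v2.pt", token="<bad-prompt>")  # hypothetical file name

image = pipe(
    # Per the post, repeating the embedding at the start and end helps.
    "<abdl-girl> portrait, painted style, high quality, <abdl-girl>",
    negative_prompt="<bad-prompt>",  # no 0.6 weighting outside A1111
).images[0]
```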
>>21257 NTA, but NovelAI doesn't store anything image-related on their side and they delete any images you generate once you logout. They're fine with NSFW or NSFW-adjacent stuff for either image gen or text gen, even have a separate Discord server for it IIRC. They don't let you download their models though, and unless you're a furry I'm not sure the subscription is worth for image gen alone, especially post-leak where the furry model seems like the only real improvement. Might change on that stance if they improve the anime one though.
>>21064 A few of my better results. It seems you need to combine multiple diaper-related tags and include "abdl", but they still come out looking like frilly briefs sometimes.
>>21275 Oh damn, human and anime tags doing way better than I expected using their furry model. Some curation is still definitely necessary, but in a lot of cases I'd even say it's better than the anime model itself.
>>21341 Damn this is pretty impressive. Gotta say, using mittens and booties to hide the horrific hands and feet the AI produces is pretty smart.
>>21389 AI-chan can also do hands! (If you give her cookies and ask pretty please...) You can see some examples on Twitter: https://twitter.com/AzusaChansAIArt
>>21394 In your experience, how many diaper closeup photos should I include in my training dataset versus full-body shots and the like? With DreamBooth the model overfits and creates weird results really fast if I only train on closeups.
>>21562 I have pretty much no close-ups; I just feed it whole illustrations. If you're giving it a mix of close-ups and other stuff, you'll probably get a mixture of a full body and a screen-filling diaper (which I'd call a blurry mess). Just give it the types of images you want it to generate and it'll do just that... if your parameters are in line. ~Azusa's wisdom
>>21625 surprisingly nice diaper and uniform
(509.56 KB 512x768 2335816714.png)

>>21643 >>21337 >>20844 Some of these are quite impressive. It's exciting to see how fast AI image generation has advanced over just the last year or so; hopefully it continues apace.
It's not perfect, but it's pretty good and definitely better than nothing. Not sure how to fix the faces, but you can tag the ones you want. https://anonfiles.com/P1OajbR8y9/diaper3_diaper3_5900_0.85-SD15NewVAEpruned_0.15-Weighted_sum-merged_ckpt I won't be doing a tutorial on how to train, but no one else was sharing a model, so I decided to make one and share it. It's trained off of stuff I found on Kemono.
can we get some AI generated diaper hyper messing content? pretty please?
>>21935 You might want to crop the toes
>>21935 Fuck yes, I need more of this celeb stuff
I mustache you a question: anyone got a working pacifier embedding? I have trained the concept into some of my models, but it's messy as hell sometimes. Thanks for the fish, btw.
(352.67 KB 512x512 mypacimodel.png)

That's good and all, but I'm looking for something like this.
>>16220 Inspired by the other thread, https://8chan.moe/abdl/res/317.html – have you tried making packs of baby nappies, but in adult sizes?
Whoever uploaded their ckpt is awesome!
>>22166 Just too bad it's struggling with the eyes big time and wants everyone to have the blue PieceofSoap eyes. The model is very clearly trained mainly on PieceofSoap and God Hand Mar and seems a bit overfit, as evidenced by their style being applied to everything, the noisy backgrounds, and the eyes getting all fucked up. I think if it were retrained with "blue eyes" included in the training captions and a few fewer epochs of training, it would be quite golden, because it is actually quite good as it is in many ways – just a bit overfit and unresponsive in certain ways. At least we also have 4chan anon https://mega.nz/folder/pacmSRzK#GycG1GY4KhO589tuiJwDKA and https://civitai.com/models/4714/diaperai And merging them together in different variations can yield great results, as you combat some of the overfitting and hangups of the respective models.
>>22186 >Jack Black My pamps are in orbit! Great stuff, anon.
>>22186 Outside of Jack Black and Taylor Swift, who are the others?
>>22200 I recognize Anne Hattaway and Scarlet Johnason (no idea about the spelling of either name and too lazy to Google). Absolutely no idea who the one next to Black is.
>>16220 Have you considered training a model, but one that does AB/DL CalArts-style drawings?
>>22205 Wow, just wow. Great artwork.
>>22205 f8f83d2f23cbaeb6220ed09a622127487b3d8744c7e8ec56b50fe9d3f1822d2b is my favorite actress!!
(575.62 KB 1079x1087 Screenshot_20230128_181629.jpg)

Is it possible to train the AI to put Ocasio-Cortez in diaps?
(85.38 KB 316x415 newfag.png)

>>22223 Fucking newfag
>>22225 Hahahahaha whoops
I started my own model using about 60 pics of various women in diapers in all types of positions, and these were the results after about 2500-3000 U-Net training steps. You can do this for free via Google Colab if you have a potato PC. Probably going to keep training it until it starts giving me garbage.
>>16220 Has anyone in this thread tried training and testing with photoshopped images, like the ones from diaperedxtreme and kevin abdl?
(669.74 KB 512x800 00119-636645645645.png)

(547.45 KB 512x800 00103-63664564564564.png)

(234.51 KB 512x512 00087-132567853409.png)

(326.16 KB 512x512 00084-132567853423.png)

(260.71 KB 512x512 00053-464666666.png)

>>21661 thanks, works well
>>22351 You typically crop the images before you train on them, and each one has a descriptor explaining what the image is, so there's no need to scan subpar photoshops of celebrities. You're better off just having good-quality photos of people in diapers in a ton of positions.
(434.49 KB 512x776 00151-788207386.png)

(538.32 KB 512x776 00107-1442235252.png)

(410.93 KB 512x776 00092-1516267868.png)

(467.49 KB 512x776 00042-45545322.png)

(480.16 KB 512x776 00049-455453221.png)

>>22336 Also started my own model, but with very few pics so far.
Started trying to get a new AI model to recognize diapers underneath clothing. It works surprisingly well with pajamas; pretty much all the models I tried before this fail at that.
>>16220 Have you tried training Protogen to generate ABDL photos?
Some pictures I managed to snap at this year's protest against toilets. Unbelievable turnout! Model: DiaperAI
(419.31 KB 512x704 00007-66595529.png)

(456.42 KB 512x704 00017-3269646586.png)

(502.96 KB 544x712 00088-3748673970.png)

>>22364 Based, thank you! >>22416 It is a porn revolution.
Has anyone come across, or made, any hyper-realistic AI-generated art that has the girl(s) wearing baby diapers (Pampers/Luvs)? I've seen some come somewhat close to that description, but not entirely.
>>22453 AI still isn't there yet. I have tried to train a model just on ABDL diapers and it still can't quite generate the pattern perfectly. These are my results for a cushie diaper model.
>>22623 I actually kind of like the designs on these, and how they're not quite what you'd expect. The AI uncanny-valley style works from the POV of a baby.
Played around with outpainting today and turned an image of just a naked torso wearing a diaper into a full scene.
(587.25 KB 2000x4000 00055.jpg)

(516.08 KB 512x1024 105311482_p0_master1200.jpg)

>>22671 Damn some of those are great (2nd and last especially.) Maybe there's hope for this whole AI thing after all.
>>22677 Why were you ever in doubt? Have you seen the progress made by the community in just a few months? Can you imagine this stuff in 5 years, when it's all streamlined, really user-friendly, fast, and much easier to use and sort? Right now the only two things holding us back are that the easy-to-use DreamBooth extension by D8 keeps having problems, so training is a bit of a pain to get into, and that some of the people who have got it to work seized the opportunity to keep it to themselves so they can try to profit off of it. Just silly temporary problems. It was obviously going to work from the moment it was released to the public in October; the image quality was nothing like today's, yet it was already extremely impressive on its own. Right now we're in a boom of creating a billion small additions through custom models, LoRAs, and textual inversions, which all have their own respective files you have to download, sort, and keep track of along with their respective keywords. Exhausting. At some point that will all be automated and done by the program itself.
>>22224 ....Yes. Or just slap a diaper onto a pic of her and then start working with the AI to deepfake it; really not hard. But yes, if you trained an AOC DreamBooth and then gave it some merges with diaper checkpoints, you'd have a checkpoint that could churn out AOC in diapers. For the curious: all I did for these was go for the anime-filter look Snapchat uses, but of course not janky as fuck. All of them came from babyfae; dunno why it made her into a tanned girl, probably bad lighting.
>>22696 I have a question about how you got the original image with the anime filter: did you extract the frame directly from the video the girl uploaded, or did you upload some video outside your Snapchat account, apply a filter, and then extract the specific frames? I don't really know how Snapchat works.
>>22696 My other question is: can the same technique be applied to TikTok anime filters?
>>22703 What? I was styling after it, not using it. I just used some of the Abyss checkpoints, and for the diaper all I did was clip it with a mask and then run a diaper checkpoint on it so it looked a bit more anime. A 2-step process from the original image of the girl to what you see – I never had to leave Stable Diffusion.
Luz in diapers
>>22700 Is it easy to do stuff like this if you have no prior experience with AI stuff?
>>22761 Yes. If using the CMD window sounds too scary, use the automatic installer for Automatic1111: https://github.com/EmpireMediaScience/A1111-Web-UI-Installer
Download the model(s) you want from https://civitai.com and place them in the ...\stable-diffusion-webui\models\Stable-diffusion folder.
Also download (optional):
https://civitai.com/models/7808/easynegative
https://civitai.com/models/5224/bad-artist-negative-embedding
and put those in the ...\stable-diffusion-webui\embeddings folder. These are easy textual inversions that improve image quality when used in the negative prompt.
Run that sucker up, choose the downloaded model you want to generate with in the top left, and start making. Start your prompt with "high quality" (an all-round good quality modifier) and have at least "easynegative, by_bad_artist, (worst quality, low quality, normal quality:1.2)" in the negative prompt. (The first two are the names of the downloaded files; if you rename the files, also rename them in the prompt, and don't include them if you didn't download them, of course.)
Then just input whatever you're trying to create in the positive prompt section. After some attempts you can play around with all the buttons and sliders and figure out what they do, or read up on it here and there. The Unstable Diffusion Discord is a resource-rich place: https://top.gg/servers/1010980909568245801
(641.72 KB 512x768 00160-3735912177.png)

(529.14 KB 512x768 00089-2938521963.png)

(568.35 KB 512x768 00111-2307748536.png)

(561.05 KB 512x768 00090-3735912177.png)

It's easy enough if you can find the right settings and models
(1.05 MB 1024x1024 00537-1124424208.png)

(1.35 MB 1024x1024 00573-1401440910.png)

(1.13 MB 1024x1024 00604-1128171894.png)

(1.29 MB 1024x1024 00762-1739373647.png)

Elsa sure is looking good this winter
(624.70 KB 512x768 03610-389872883.png)

(548.61 KB 512x768 03573-389872846.png)

(489.25 KB 512x768 03606-389872879.png)

(475.64 KB 512x768 03604-389872877.png)

So now that I have an infinite diapered waifu generation machine, has anyone figured out a reliable method of getting messy images, or does a LoRA need to be cooked up for that?
(736.65 KB 512x704 Movie Poster.png)

(450.53 KB 512x704 Movie poster 2.png)

(562.31 KB 512x640 Movie poster 3.png)

Here are some movie posters I cooked up. Why does Stable Diffusion just shit the bed with words?
>>22918 The ultimate hope is that I manage to train DiaperAI to do all of that without confusing the concepts – at least mostly not confusing them, since they share a lot of the same space both in tokens and in visual placement for the model. For now it's just blind random chance when you pile on words like scat, dirty, and pooped, though you'll generally find everything else getting dirtier and more broken way before the diaper does.
Fine-tuned DiaperAI (by lifania) on tanukeart's artwork. I think it's not bad. Mostly babyfurs, because I think 8chan does not like shota.
>>23023 These are super cute, good work!
>>16220 Could anyone here generate AB/DL images of Equestria Girls?
>>23038 thank you very much
>>16220 I'm not being asked, nor am I anyone to suggest or recommend, but I'll still mention a few suggestions for future models to train that could be combined with AB/DL-themed AI for some extra variety:
1. Equestria Girls and Monster High
2. Panty & Stocking
3. Monster Musume
4. Mecha musume (Google search results and Danbooru)
5. Studio Ghibli designs
6. 32-bit pixel art
7. Liminal spaces and AB/DL rooms
8. Animatronics and FNAF robots
9. The X-Men of the 90s
10. Manga panels that show nappy scenes
>>23023 This is very cute! How did you merge the lifania one with tanukeart or furry? :3 Can you provide a model link pls? <3
Wanted to see if I could edit this photo to remove the watermarks and the old dudes in the pool. Too much JPEG artifacting to really change anything else, though.
>>23085 You would have to remove the noise from the image. Of all the online pages that remove objects and watermarks for free, I would first opt for Tencent ARC; it's the one that has worked best for me, as it lets you zoom in to a maximum of 500% so you can select the watermark you want to remove with greater precision. https://www.vidmore.com/es/watermark-remover/#
(15.80 MB 3360x5120 Diaper-Pool.png)

>>23093 >>23085 The best I could get out of Gigapixel. Not great.
huggies
ControlNet is awesome.
Minus the body deformities and the diapers being worn backward, this will look good once I'm done processing it.
(1.13 MB 1024x704 1.png)

(1.26 MB 1024x704 2.png)

(369.62 KB 512x512 rockinghorse.png)

>>16220 I think I found a Stable Diffusion model called ControlNet that lets you generate a "copy" (so to speak) – variations conditioned on the same image. This could help AB/DL models have less trouble generating more consistent nappies and hands. Here's the GitHub for the model: https://github.com/lllyasviel/ControlNet
(105.53 KB 896x528 hand1.webp)

(145.34 KB 2261x1200 hand2.webp)

(100.73 KB 896x528 hand3.webp)

>>23133 NTA, but my understanding is it can work from features preprocessed from a particular image (e.g., canny edges), and in some cases it can also take manual features (scribbles, etc.) as input. A more recent update also added something called "guidance start", which allows for correction of particular parts of the image (like SD's notorious abomination hands). Diapers are probably worth a shot too.
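For anyone who'd rather poke at this outside the webui, here is a minimal sketch of the same idea with the diffusers library, assuming a recent diffusers version and the public lllyasviel canny checkpoint (file names and prompt are placeholders):

# ControlNet sketch: condition SD 1.5 generation on canny edges from a reference image.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # swap in your own SD 1.5 checkpoint
    controlnet=controlnet, torch_dtype=torch.float16).to("cuda")

# Preprocess: extract canny edges, the "features" the post mentions.
image = np.array(Image.open("reference.png").convert("RGB"))
edges = cv2.Canny(image, 100, 200)
edges = Image.fromarray(np.stack([edges] * 3, axis=-1))  # 3-channel map

# control_guidance_start/end limit which denoising steps ControlNet steers,
# roughly the "guidance start" knob the post refers to.
out = pipe("1girl, diaper", image=edges, num_inference_steps=25,
           control_guidance_start=0.0, control_guidance_end=0.8).images[0]
out.save("controlled.png")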
>>23152 Maybe the piecesofsoap and red moda drawings can help?
>>23163 Also the layout of merunya ?
>>23069 I promised Tanukeart that I would not distribute this model, unfortunately I cannot give you the link
Dumped a couple of my models on the diaper2000 mega https://mega.nz/folder/pacmSRzK#GycG1GY4KhO589tuiJwDKA/folder/9O0gyDyJ Have fun. They're not the greatest, but one of them is at least useful for inpainting pacifiers. I'm not uploading them to civitai.
Examples ----- ^
>>23259 Yeah the diaper models are just meh, but the paci model is half decent if you combine it with a better diaper model.
>>23027 And because why not, have three more. Generated with the semireal diaperai model. Anyone else getting weird green spots with that model/know how to get rid of them? Aside from that the model is good. By default it'll zoom in too much, but that's easy to prompt away. Fourth image is also diaperai, fifth is wd1.5 b2 with my anime-only textual inversion.
>>23271 I believe you need to update to the latest VAE model. I had those spots too, but updating it and making sure it was actually being used seemed to get rid of them. They aren't always green either; on some of my images they were grey or blue.
>>23271 I was delighted with how Fluttershy and Sunset Shimmer turned out; you can feel the essence of the character despite it not being the original drawing style. Twilight is excellent, the only thing I would edit out is the blurred heart in the middle of the nappy. Thanks!
>>23280 Jesus, AI Art is getting so good so quickly... I really can't wait to see it able to handle generating comics/sequences.
>>23280 Need more like the one in the middle <3
Has anyone worked out LoRA models yet? I just tried this one of Belle Delphine and it isn't too bad. You can basically paste it onto any standard model, which makes it super flexible.
Remember, if you want this thread to die, use sage so it doesn't get bumped.
(405.51 KB 850x1100 Oringal Cake .jpg)

(1.59 MB 1024x1536 lora3.png)

>>23373 Yeah, I made one for the exercise based off of Cake's images. Out of respect for their work, I'm not using it or uploading it. Sorry if this rustles anyone's jimmies.
>>23373 dude those results are amazing. Any chance you could share the settings so I could try to generate some?
>>23393 Just grab a Belle model, or whatever model you want, from civitai; prompt the LoRA while using a diaper model, and then just clean it up with inpainting. All my settings are pretty much default. I find that using the Realistic Vision model when inpainting produces the best faces.
(126.86 KB 1280x1920 jennette mccurdy)

Haven't seen a lot of requests being made. Could somebody do one of Jennette McCurdy in a wet diaper?
>>23373 which diffuser are you using to make these?
>>23413 I literally grabbed everything off of the civitAI website. Just go there and search. There are a ton of checkpoints of tons of people and styles. Just use them with the a1111 webui. Here is one that only does skirt lifting/upskirts.
(1.81 MB 896x1216 nice.png)

(6.10 MB 1792x2432 upsacled.png)

(653.89 KB 1792x2304 upsacledfudgeuphand.jpg)

Tried my hand at remaking Belle; not bad.
(387.93 KB 2048x3072 Diapered Ari 1.jpeg)

>>23428 I just started working with AI art, and everything I do gives extremely similar outputs, even with drastically different prompts. I'm using the client-side version of Stable Diffusion and DiaperAI, and I can't seem to get upskirts or diaper-changing pictures (or just open/half-diapered). Any suggestions on what my next step is? This has been one of my better pictures IMO.
>>23460 Maybe you're using a fixed seed; for a random seed you need -1 in the seed textbox. DiaperAI wasn't trained much on diaper-changing pics. For upskirts you need a LoRA or something else trained on upskirt art/photos.
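For reference, the seed mechanics are the same outside the webui; a minimal diffusers sketch (the model name is just an example):

# A fixed seed reproduces the same image for the same prompt/settings;
# omitting the generator (the webui's seed = -1) gives a fresh random image each run.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

g = torch.Generator("cuda").manual_seed(1234)      # reproducible
img_fixed = pipe("1girl, diaper", generator=g).images[0]
img_random = pipe("1girl, diaper").images[0]       # different every run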
(371.77 KB 2048x3072 Diapered Ari 2.jpeg)

>>23512 So I actually figured out how to force upskirts! I started with a skirt, found a pic that was a mix of skirt/diaper, and used the DRAW feature to color in blocks for the skirt and diaper! Pic definitely related. Does DiaperAI do anything besides the more adult-style diapers? I'd REALLY love to see some Goodnites or the pink girls' pullups. I found some .pt files that are supposed to help, but can't figure out how to utilize them.
>>23515 It was trained with those in the images, but it wasn't really able to learn them. And of course it can't do such advanced concepts: it wasn't trained on anything but people wearing diapers, no open or half-down diapers or anything. That is really advanced and takes a lot of data to make work, as well as either a lot of abstract tokens (like ohwx = pullups), which is just a headache to use in practice IMO, or A LOT of distinct but semantically understandable images to accompany them. At least that is the case for making it work in a single dreambooth-trained model. A LoRA, on the other hand, could handle any one specific concept on its own, but that would require quite the library of different LoRA files, which becomes quite a mess, at least for my liking.
(883.34 KB 2048x3072 Diapered Loli Ari 1.jpeg)

>>23535 So if I wanted pullups, Goodnites or a diaper-change image, I'd have to make my own model? Here's a more loli version >>23517 I haven't messed with cartoon/anime stuff yet.
>>23541 Stop asking anons to make art for you; order a fucking commission.
(926.23 KB 2048x3072 Diapered Stephanie 1.jpeg)

>>23541 Sadly I couldn't find any models for her. I spent a little over an hour fucking around with settings and prompts and ended up making this. I'm struggling to get it any closer. I tried to make her a little older but couldn't get the prompts to do it without fucking something else up. >>23543 Meh, it was good practice for me. I learned some new tricks.
>>23543 Yeah, and also, Lazy Town Stephanie's actress was 13 or so.
>>23545 Looks pretty good. How does it go with brown padding?
(1.89 MB 1024x1024 00040-2354038042.png)

(323.39 KB 512x512 00061-3299728271.png)

(303.08 KB 512x512 00056-3542658826.png)

(289.18 KB 512x512 00184-2687098599.png)

(343.36 KB 512x512 00074-2281376519.png)

Created my own LoRA using old images from BabyDoll. It does half-decent medical diapers, and looks good at about a tenth of the size of a regular checkpoint too. Still figuring training out. It leans super heavy on front-facing topless pics though.
>>23655 Do you use DiaperAI as a base model?
>>23675 Nope. DiaperAI is great, but it doesn't seem to play well with any LoRAs I've tried with it, so I opted to just train against the standard SD 1.5 model. You can use the LoRA with any model you want, though; they all seem to work with it to varying degrees. I may upload it to civitai if I get around to it.
(968.04 KB 768x1024 AHA-Model.png)

>>23655 Nice, funny, I was working on the same thing but for ABU Kings. I'm dropping my ABU-Kings model in the diaper2000 mega; also made a LoRA for it if anyone wants to try and mix it into some of the newer models. The trigger is "AHA-Diaper".
(727.78 KB 1024x1024 00028-931445633.png)

(1.18 MB 1024x1024 00007-3743729696.png)

(1.22 MB 1024x1024 00043-606643783.png)

(1007.33 KB 1024x1024 00036-3739512933.png)

(1.01 MB 1024x1024 00033-275944728.png)

>>23744 Is this the same LoRA? https://civitai.com/models/20042/abdl-white-diaper Couldn't get it working properly.
When I first started seeing AI-generated diaper porn a few months ago, it all looked completely unappealing. Now, just a few short months later, I'm seeing pics pop up all the time that are actually fappable. If it keeps progressing, the AI revolution is really going to be huge.
(993.67 KB 1024x1024 00006-3807155243.png)

(994.46 KB 1024x1024 00034-2674232521.png)

(1.03 MB 1024x1024 00007-4070127862.png)

(902.03 KB 1024x1024 00034-1052382880.png)

(1.05 MB 1024x1024 00030-2049954051.png)

>>23748 This is a model I created using TheLastBen's Colab. It was trained on about 500 images of girls in diapers, mostly amateur content. I have generated hundreds of images as I tweak it more and more, but I'm still learning. Turning it into a LoRA is the next step for me once I learn how to do it on Colab (my computer is not able to run SD natively). Thanks for linking that LoRA, I'll try it out. More images from my model:
(1.01 MB 1024x1024 00121-2837476229.png)

(979.98 KB 1024x1024 00005-2674470533.png)

(1022.01 KB 1024x1024 00018-2208893809.png)

(1.09 MB 1024x1024 00089-1970226047.png)

(1.08 MB 1024x1024 00216-2624205084.png)

(393.62 KB 512x512 00038-2049501595.png)

(340.02 KB 512x512 00029-2118591434.png)

(403.13 KB 512x512 00035-251694223.png)

(459.88 KB 512x512 00032-1583302318.png)

>>23748 That one is actually mine. Make sure you use it on an SD 1.5 or 1.4 model for the best results. You can play with the weights as well to get better results on non-1.5/1.4 models. I'm trying for a Goodnites LoRA now, but my results end up looking too samey; it's rare that you get a full-body shot, so I am manually using SD to add faces/change locations so all my photos are distinct. Here are some samples from the first Goodnite LoRA I created.
(1.12 MB 3024x4032 skn0t9qoz3la1.jpg)

(3.15 MB 1152x1920 00328.png)

Has everyone checked out openoutpainting in Stable Diffusion? You can take some well-shot but pretty "meh" photos and really go crazy with them.
>>23764 Dude, we NEED to collab lol. I've been trying to learn how to train LoRAs to make Goodnites but I have some questions. Any chance I could find you on discord? I have thousands of Goodnite pics, or I will even sit there and edit them, anything to help get Goodnites (and eventually pullups) into this!
>>23774 The trick is to have high-quality, unique images. Having pics of the same person, or images with the same background, will end up generating very similar-looking pics. Then make sure that you caption them well. I use kohya_ss to train on my 8GB 3070, and it takes between 15 minutes and an hour to generate a model depending on how many images I give it. There are guides online that can teach you the basics. I am on discord, but I'm not sharing that on 8chan, lol. Alternatively, separate all your photos by Goodnite design, caption them, and upload them to mega, and either myself or another anon can grab them and try to make a model.
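For anyone following along, the dataset layout kohya_ss expects looks roughly like this; the folder name's leading number is how many times each image repeats per epoch, and the trigger word and file names here are made up:

train/
  10_goodnite girl/          <- "10" = repeats, "goodnite girl" = trigger/class
    img001.png
    img001.txt               <- caption file: "goodnite, girl, standing, bedroom"
    img002.png
    img002.txt

Each image gets a sidecar .txt with its caption; kohya reads the pairs automatically.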
>>23779 I was trying out dreambooth. I followed a youtube guide and IDK what I'm doing wrong, but it ran for like 3 hours and still didn't get anywhere close to working. The video used a face as an example, so maybe the settings don't work as well for this? A couple of questions, if you don't mind:
1) What model should I be using to build the LoRA off of?
2) How many pics should I actually use?
3) Apparently I missed the part on how to caption images; how is this achieved?
>>16220 What if they trained the model to generate faces similar to FaceApp's babyface filter? Would that give interesting results, plus the 3D anime-style eyes?
>>23779 I downloaded Kohya and found a video that explained a lot of info I was missing. I'm running a test model right now to see how it turns out. I'll report back with my findings lol
>>16220 Oh also try to recreate snapchat's babyfaces filter.
(501.30 KB 512x712 00030-38057808.png)

>>23779 A follow-up to >>23793: this is the closest I've come so far, messing with the settings in a1111 with my LoRA. It just seems like my model wasn't trained well enough, but it's clearly TRYING. I may have to go back and tweak the training. Also, the face looks like shit... and yes, I am the Ariana guy lol.
>>23764 At this point, the more I learn and try, the worse results I get. What settings are you using? I just keep fucking it up more lol. I've created the images, I've made the txt files, but none of the settings seem to make it better, just worse. I'm not sure if it's the training rates or what.
>>23800 If you aren't already, try putting a keyword or phrase at the start of all captions, like "goodnite" or something, and use that in your prompts. I was running into the same issue of having my embeddings just not working until I did that, as I assume it's giving it something to latch onto that is common between all of the training images.
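If your captions are already sitting in sidecar .txt files, retrofitting a trigger word onto all of them is a one-off script; a rough sketch (the trigger word and path are placeholders):

# Prepend a trigger word to every caption .txt in a dataset folder.
from pathlib import Path

TRIGGER = "goodnite"                      # hypothetical trigger word
for txt in Path("train/10_goodnite girl").glob("*.txt"):
    caption = txt.read_text(encoding="utf-8").strip()
    if not caption.startswith(TRIGGER):
        txt.write_text(f"{TRIGGER}, {caption}", encoding="utf-8")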
nice, diaper aiv2 is out https://civitai.com/models/4714/diaperai I also uploaded a Paci Lora to the mega that works with it.
(427.43 KB 512x512 100_Aifog-Paci.png)

>>23846 How many training images did you need to get all of those trigger words? Holy shit that's a lot.
Just to be clear, it's not mine; DiaperAIv2 is the civitai user lifania's model. I just realized they merged in my Paci model. Thanks for the mention, if you're reading this btw. It's 168 images.
>>23848 The [#] number is the amount of images with that tag in them. Most of the patterns don't really work; it's more a proof of intent, which is why I need to use a ton more images. >>23849 Cheers. It's kinda weird how the model still can't do pacifiers, despite a huge chunk of the photos with a face having them using a pacifier, and I did caption it.
Still struggling with LoRAs. I even went from 20 images to 50, even upscaled the details, and my LoRAs give worse results. I am very confused.
>>23846 If anyone here has a free moment, replacing the mouth with a pacifier would be much appreciated.
ai fun
>>23900 The original picture this one is based off is always a favorite of mine.
(3.28 MB 640x480 1659809408265782.webm)

>>23908 oh, I love her. did you know she was in "Girls gone wild"?
(390.53 KB 512x512 53023798827.png)

(299.67 KB 512x512 mbnjkguyghj.png)

(313.07 KB 512x512 tmp4jbj4c1o.png)

(294.21 KB 512x512 tmpimoing1g.png)

>>23912 No I had no idea!
>>23883 Do it yourself. I remade the paci LoRA for SD 1.5 and just uploaded it to civitai: https://civitai.com/models/24424/pacifier-lora-sd-15-based
Dudes... we need to train models. I have 200+ gigabytes of ABDL videos in good quality (1080p, all ripped from JFF) and links to another 300+ gigabytes. We can break them down into frames and train regular models, or feed it all into a text2video one. I think we really need to crowdsource pics and tags.
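Breaking videos into frames is the easy part to script; a sketch with OpenCV that keeps one frame every N so you don't drown in near-duplicates (the path and interval are arbitrary):

# Sample frames from a video for dataset building.
import cv2
from pathlib import Path

VIDEO, OUT, EVERY_N = "clip.mp4", Path("frames"), 60  # ~1 frame per 2 s at 30 fps
OUT.mkdir(exist_ok=True)

cap = cv2.VideoCapture(VIDEO)
idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % EVERY_N == 0:
        cv2.imwrite(str(OUT / f"frame_{saved:05d}.png"), frame)
        saved += 1
    idx += 1
cap.release()

The harder part is culling and captioning what comes out; blurry and duplicate frames will hurt more than they help.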
lifania, what about releasing some part of the training dataset, so the data can be used as an example for a crowdsourced dataset for finetuning your model?
>>24027 I would be on board. Maybe someone could start an AIDiaperArt discord and do all the setup and rules and shit (never done it otherwise I would) so we can communicate faster?
>>24042 this post was actually meant for >>24026 not >>24027 but i'd be down for this too
>Maybe someone could start an AIDiaperArt discord and do all the setup and rules and shit.
Took 2 hrs to write the rules. https://discord.gg/FYUrgMrdBd
>>24074 Yes, thank you. But what prompt do you use for pics for the v1 model?
>>24086 I dunno. They got lost somehow, which is why they're not in there
>>24074 are you interested in training a model on arts?
>>24095 It is my plan to at some point, but first I want a proper photorealistic model that doesn't produce wonky-looking diapers with an asinine hangup on closeups. But I could lose interest before either one gets fulfilled.
I have no interest in still pics generated with AI. They'll always be like a poor copy of human-made art and lack personal style. Some people visit museums or galleries; I visit pixiv. (Gonna visit Japan some day, though, to see art and artists in person at Comiket, omufest or something.) However, it would be nice if AI could help generate animations/ugoira in the future. Making good ones is very time-consuming even with the help of current computer programs.
>>24103 I have plans to try... With the use of ControlNet I think we can do something nice-looking... Not in the style of Linkin Park's last music video...
>>24101 With training data that is literally 80% diaper closeups you just CAN'T do it. We need more training data, more REAL photos. And not only in 512x512 res.
>>24107 That is V1, what does that have to do with anything?
(1.21 MB 1024x1024 00731-2531966584.png)

(2.79 MB 1280x1536 00771-3165500666.png)

(651.05 KB 640x768 00784-773736337.png)

(744.92 KB 640x768 00794-2257427055.png)

(2.97 MB 1280x1536 00809-2408814991.png)

I love throwing random shit at AI man, should really do it more
>>24164 Excellent results
>>16220 Why limit ourselves only to character generation? We could also start training the AI to generate AB/DL-themed nurseries, rooms, and even adult-sized hard plastic playhouses. We could also train the AI to generate infantilised adult objects in the purest Fisher-Price style.
>>22686 If anything, this is just the early stage of the technology. It will evolve even more and be able to create entire movies. You will just type "create a movie directed by Wes Anderson about a girl who decides she wants to be treated like a baby". It is just a matter of time.
>>24172 Unless "Open"AI has anything to say about it
>>24184 That's a nice elf
>>16220 With a stroke of luck and without using Discord, I was able to generate this in Wombo AI.
This is probably not worth a thread on its own, so I will just post this here. The Washington Post wrote an article about people selling AI porn, and I find it curious that one of the guys they interviewed was making AI diaper porn to sell. Sadly they didn't publish any images for us to check out. https://archive.is/2023.04.11-144319/https://www.washingtonpost.com/technology/2023/04/11/ai-imaging-porn-fakes/
>>22416 This made my day. Thank you so much. So great.
Anyone knowledgeable enough to make not only photos of girls in diapers, but also images of them on childish objects? Like a baby bouncer or something? Something like this, but more photorealistic-ish?
(1.39 MB 1152x1600 00476.png)

>>24476 The AI hardly knows how to make actual toddler stuff; it would probably have to be trained on those objects before it could change their scale correctly. Inpainting might be able to do a better job. I was able to make that pic look a bit less uncanny with it.
(61.12 KB 1024x384 1681798148421250m(2).jpg)

I like how this one came out; dig the McDonald's branding on it.
>>24513 Thanks. Yeah, I was imagining it wouldn't be that simple.
>>24623 That middle one, damn fine. How does it do with messing?
>>24638 These are incredible, what was used?
I fear no man. But that technology... it scares me.
Trying to make a LoRA to work with the DiaperAI checkpoint. Do any anons have tips for getting good face recognition? I'm testing it with a LoRA with my face on it, and while it works fine for the base SD 1.5 model, it just gives me distorted faces with the DiaperAI one.
>>24813 are you training the lora using diaperAI as the base model? they aren't interchangeable
>>24876 I trained for both, and to be honest the LoRA trained on the base SD model is giving better results. I finally got okayish results, mostly by working on negative prompts. It makes a huge difference.
(468.92 KB 512x640 00004-2532945456.png)

(448.01 KB 512x640 00007-3054140946.png)

(469.05 KB 496x800 00017-757287908.png)

>>24884 Instead of tediously adding words to the negatives, use embeddings. Use these two together: https://civitai.com/models/7808/easynegative https://civitai.com/models/5224/bad-artist-negative-embedding Or use this one: https://huggingface.co/FoodDesert/boring_e621 You can also use all 3 together, but I find that really restricts the results it can gen. Then I generally prefer having "(worst quality, low quality, normal quality:1.25), out of focus, blur" as well in the negative prompt. So currently my default negative prompt these days looks like: "(boring_e621:1.2), (worst quality, low quality, normal quality:1.25), out of focus, blur" Simple and easy to adjust. DiaperAI is sort of cobbled together from overtrained models I trained and merged down into usable ones, so it's prone to being quite quirky in its results, as it's not really a healthy model to begin with.
>>24885 You, sir, are a gentleman and a scholar. I'll be sure to try it; your results are miles ahead of what I get for now. Any tips for the prompt itself?
(386.14 KB 512x640 00032-1339616464.png)

(392.63 KB 512x640 00065-2128902683.png)

>>24889 Neh, just throw shit at the wall and see what sticks for ya. Play around with clip skip: Clip Skip 1 has the largest tendency to do close-up diaper shots, while 2 and 3 tend to be more willing to do full compositions, but there are no hard and fast rules. Play around with merging with other models too.
I'm a bit confused by all the image-generation AI stuff. I have some questions about it and would appreciate it if someone could help me:
1 - Do I need a strong PC to generate it? My PC is strong, but not crypto-mining-for-hours strong, for example.
2 - Some of the images here are really good; do I need to know programming to get similar results?
3 - Do I need to pay for these results, like a premium service in one of these?
4 - All the links so far are really confusing to me; some just send me to prompt websites where I can download or buy them, and others send me to huggingface or github, which I really don't know anything about.
Thank you very much in advance to whoever answers me.
>>24902 I'm not the most knowledgeable anon, but I'll try to give some answers.
1 - You can generate pics with as low as an RX 580/1060, but it will take a long time per pic. It gets more and more comfortable with a high-end GPU. I've got a 3080 and a pic takes 3-4 seconds to generate, which is very comfortable for testing.
2 - You don't have to know how to code. Having some basic knowledge about how these algorithms work and the underlying concepts will help, though.
3 - No, you can self-host Stable Diffusion and don't have to pay anything.
4 - This is the trickiest part. I think you need to be moderately technical to navigate through all of this. If you've never used github and get the chills every time the words "command line" pop up, this will be hard for you at the start. Nothing impossible, though, but you might need to involve yourself and do your research. There are tons of very good resources out there that can point you in the right direction; searching for the keyword "guide" on civitai will give you some of the best.
Good luck!
>>24905 I'm using a 1660 Super, it has 6GB of VRAM, is that okay? I'm trying right now from a guide on civitai; so far it's only installing something from Colab.
Alright, I managed to get Stable Diffusion to work on Colab. I've created a few images, but none of them had diapers. I tried using Hassaku. What models and what prompts should I use?
>>24914 You can find DiaperAI on civitai. I warn you though: Stable Diffusion set up on Colab won't use your GPU but Google's, which means that by definition nothing you do on Colab is private. If you want some privacy, you should set it up on your local computer using the AUTOMATIC1111 webui.
>>24915 Not bad at all, but I wanted something more anime-style. Does DiaperAI only work with more realistic styles? Also, all the pacifiers look completely disfigured; is there a way to fix this?
>>24916 Use the Anime edition of DiaperAI + awesome Pacifier LoRA from AiFog https://civitai.com/models/4714?modelVersionId=9623 https://civitai.com/models/24424
Photoshop gore :D idk how to add an elastic band here; the model doesn't even know what it is. Cloning from a real photo... ugh, not my method.
(1.06 MB 1024x1024 01a.png)

(350.00 KB 512x512 01b.png)

(1.03 MB 1024x1024 01c.png)

(350.27 KB 512x512 01d.png)

(1004.16 KB 1024x1024 01e.png)

Clementine alternative ending
>>24983 I understand that Stable Diffusion can be integrated into Photoshop and GIMP.
>>24791
>Something i made for a chat in Beta character ai
With what prompt did you make it produce an illustration halfway resembling a nappy?
>>16220 Sorry for the request; unfortunately I don't have a computer to run Stable Diffusion, and I don't know if it is possible to run it on Google Colab without the censorship. But is it possible to replace the panties with a pull-up nappy?
>>25463 p.s.: stock photos of models wearing nappies help the AI to draw better.
>>25463 You only really need 4GB of VRAM to run AUTOMATIC1111; you sure you don't have the PC for it?
(60.24 KB 500x531 4ay90r.jpg)

>>25305 Great pictures, I wanna make some myself! How'd you do it? Send me down the rabbit hole! Do I make an account, do I download something?
>>25468 No, to my bad luck: my PC was ruined, and I borrowed a laptop that can only take 2GB of RAM and has no video card. But you made a new doubt arise in me: if I connect a card through the port where the wifi card is connected, will it be possible to run? Or will it require more system RAM in addition to the video memory?
>>25562 Maybe, maybe not, I dunno. Some people have made it work with just 2GB VRAM as well. There are command-line flags that can be put in for both low VRAM and low RAM (just not both at once, since they offload workload to each other). If you're RAM-limited, here is a discussion where some people try to figure out ways to make it work: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/6779
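For reference, the flags in question go into webui-user.bat (or your launch command); something like:

set COMMANDLINE_ARGS=--medvram --xformers

with --lowvram instead of --medvram for 2-4GB cards, and --lowram when system memory rather than VRAM is the bottleneck. Exact behavior shifts between webui versions, so check the wiki for your build.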
>>23369 Bit late (Tor posting was blocked for a while, and images are still blocked) but here's a few more uploaded to Mega: https://mega.nz/folder/ZFgCyaJT#yHnnLMGFJcaKD9qgXNOhkw >>23845 Seems like you might've already figured this out, but when you're making a lora or dreambooth the words you use for it really matter. 100_Aifog-Paci is a really bad choice because it tokenizes into "1", "0", "0", "_", "ai", "fog", "-", "paci" (or something like that). Using rare tokens or tokens that are already close to the desired goal can really help. While a lora does change the text encoder, it doesn't change the tokenizer. With Textual Inversion it doesn't matter since that does its work before the tokenizer is applied. One thing that works really well is to start with textual inversion, and then do lora training using the results of the TI in the prompt. Sometimes this is called Pivotal Tuning. Not all scripts support doing that though. Anyway, Waifu Diffusion 1.5 beta 3 seems a *lot* better at making diapers than beta 2 when using a textual inversion (even with the one I trained for WD1.5b2), though not quite as good as a merge of wd1.3 with an actual abdl/diaper model.
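You can check how a candidate trigger word tokenizes yourself; a quick sketch with the transformers CLIP tokenizer (the exact splits depend on the vocab, so run it rather than trusting the comments):

# Inspect how CLIP's BPE tokenizer splits a trigger word.
from transformers import CLIPTokenizer

tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
print(tok.tokenize("100_Aifog-Paci"))  # likely many sub-tokens, as described above
print(tok.tokenize("pacifier"))        # far fewer pieces, already near the concept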
>>25565 I appreciate you sharing the link I'll see what I can do to get SD to run with what I have and if it still doesn't work then tend to resort to the image modification thread.
>>21661 Played around with diaper3_diaper3_5900_0.85-SD15NewVAEpruned_0.15-Weighted_sum-merged.ckpt a little. I'm guessing based on the training data it was mostly this cartoon style and the faces kinda all came out looking similar, but still decent results.
(53.28 KB 640x640 1609099616339.jpg)

>>25675 Am I correct in guessing these were trained in part using pics of Mihina's face (pic related)?
>>25565 But the speeds are shit. I have a 1660S; speeds are still shit if I compare them to Colab speeds. Waiting 30 sec per pic (512x768, fp32, xformers, no medvram, without hires fix) is not something that I want to do, especially when I'm trying to experiment with prompts/inpaint diapers (a good diaper can take at least 15 seeds; 15 * 30 s = 450 s). Spending an hour making a pic is a usual time for me, but spending half of that WAITING UNTIL YOUR PC INPAINTS SOME DIAPERS is not good at all. Waiting MANY MORE seconds with 2GB is... disgusting. And yes, your CPU and RAM can affect speed, at least model loading times and webui start-up times (10 MINS, ARE YOU SERIOUS?!?!?). Better to spend that time trying to really draw some diapers. Or touch grass.
>>25718 Keep in mind that nothing you do in Colab is private, by design. I know it sucks to have low-hardware issues, but IMO when you deal with fetish stuff you're better off sucking it up and dealing with what you have rather than risking selling yourself out to big corpos. What if you accidentally generate a kid pic? It can happen when you put in the wrong negative, and it can easily get you framed for pedophilia. I wouldn't risk it.
>>25719 I live in a country that doesn't care. Btw, you can trim models.
>>25718 Colab is banning NSFW AI generations; we'll see if your gens fit into that or not. Good luck. I had a 1660 before I upped to a 3060 for the VRAM and never felt it was slow or painful. Now we even have Torch 2, which speeds up generation significantly. Use a fast sampler like Euler A, keep it to 20-30 steps, and I don't really see the problem personally. But we all have a different feeling for what is fast or not.
>>25727 Now Colab is banning all SD generations on the free tier, and I can't even buy Pro to test it (same shit with other platforms; I can't buy anything). A couple of months ago Colab wasn't banning anything; my friend was generating tons of porn and he can still use Colab. Absolutely all of my work (which is now crap by my quality standards) on my resources (DA and Pixiv) was generated in Colab. I have enough NSFW works (still not banned). @SheogoratEvil on pixiv and DA, check it out if you want.
>Use a fast sampler like Euler A and keep it to 20-30 steps
I can't. DPM++ 2M Karras generates better diapers (with DiaperAI 2 at least), and DDIM is good for skin texture when using Lyriel or Deliberate.
>>25748 You can use Euler A for fast gens; when you find something that looks good, you can use DPM for a polished gen. If you have the ancestral noise multiplier set to 0, Euler A creates very similar results to the other samplers. The only reason it's the wild one, along with DDIM, is that they have the ability to multiply the noise, creating very different results. Specifically I'm talking about "eta (noise multiplier) for ancestral samplers" under Settings > Sampler Parameters.
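For the curious, the "ancestral" part is literally extra noise re-injected each step, scaled by eta; k-diffusion (which the webui uses for these samplers) computes the split roughly like this, paraphrased from memory, so treat it as a sketch:

# eta scales the noise an ancestral sampler (Euler a) adds back per step.
# eta = 0 -> sigma_up = 0 -> no re-injected noise, so the step degenerates
# to plain deterministic Euler, matching the behavior described above.
def get_ancestral_step(sigma_from, sigma_to, eta=1.0):
    sigma_up = min(sigma_to,
                   eta * (sigma_to**2 * (sigma_from**2 - sigma_to**2)
                          / sigma_from**2) ** 0.5)
    sigma_down = (sigma_to**2 - sigma_up**2) ** 0.5
    return sigma_down, sigma_up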
>>16220 For the first time I finally managed to use DiaperAI. The bad thing is that it was through the pixai.art website, but I still liked the final result. To get this result, I used an image generated by catbrid.AI.
>>25813 Subsequently, I asked the AI model on the pixai.art website to generate image variants for me, using as a LoRA one that was supposedly designed for dummies (pacifiers). Those didn't come out so well, but the nappies are still a good result.
(228.51 KB 1168x945 IMG_1026.jpeg)

This is slightly off topic but didn’t think it deserved its own thread - how would I go about digitally coloring this image? Or, can any of you do it? Thanks in advance and keep up the great work!
>>25876 Palette.fm can do that. I'm sure you could probably use a ControlNet model to do it as well, although that might alter the image slightly.
Now I've read into SD and I'm trying to do hires on my GPU server using max VRAM, but somehow there are green spots appearing, like here on the bottom left and the right eye. Is there a way to avoid this when using highres fix?
>>25920 Discolored circles mean the VAE you're using doesn't work properly with the model, so use a different one.
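In the webui that's the "SD VAE" setting; outside it, the same fix is just loading a standalone VAE. A diffusers sketch, assuming the common ft-mse VAE as a drop-in for SD 1.x models:

# Swap in a separate VAE to fix discolored spots / washed-out decodes.
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # your model here
    vae=vae, torch_dtype=torch.float16).to("cuda")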
(694.60 KB 1024x1776 00005-3867493023-0000.jpg)

(667.90 KB 1024x1776 00006-3867493024-0000.jpg)

(656.96 KB 1024x1776 00009-3867493027-0000.jpg)

(718.78 KB 1024x1776 00011-3867493029-0000.jpg)

(755.60 KB 1024x1776 00017-3867493035-0000.jpg)

diaperai_v2 does a really good job at my niche fetish of diapered knight women
(3.56 MB 1024x1776 00002.png)

Retrained the Paci LoRA on the AnyLoRA model: https://civitai.com/models/24424?modelVersionId=88663
>>26141 This is certainly niche, but I love it. Got any more?
(438.25 KB 512x800 00328-2610422104.png)

(513.80 KB 512x800 00327-2610422103.png)

(545.65 KB 512x800 00322-2610422098.png)

(527.05 KB 512x800 00314-3226508128.png)

(509.26 KB 512x800 00316-3226508130.png)

First day attempt at using stable diffusion part 1
(547.06 KB 512x800 00319-3226508133.png)

(555.81 KB 512x800 00317-3226508131.png)

(310.48 KB 512x512 00256-3937912573.png)

(457.31 KB 512x768 00183-6.png)

(512.63 KB 512x768 00171-2.png)

part 2
(546.60 KB 512x768 00168-7.png)

(429.78 KB 512x768 00169-8.png)

(401.42 KB 512x768 00153-8.png)

(480.02 KB 512x768 00154-1.png)

(396.91 KB 512x768 00151-6.png)

Part 3
(437.41 KB 512x768 00149-4.png)

(518.47 KB 512x768 00130-1.png)

(336.58 KB 512x512 00077-735761685.png)

(474.38 KB 512x768 00116-3.png)

(450.18 KB 512x768 00118-5.png)

part 4 (last) This is pretty fun!
Can I request an anime-style ABDL male in his 20s with brown hair, brown eyes, wearing glasses, a slightly chubby body, and a plain white adult diaper with blue tabs? Please keep it SFW, no wetting or messing.
Trying SD; after many attempts and moving around prompts, LoRAs, etc., this came to me.
>>26141 I didn't know how much I needed diapered knight girls.
>>26282 >>26283 >>26284 >>26285 Some aspects (like expressions) look a little off but these are great! Especially the ones with the blonde in the last two posts. Why is it so hard to find some big tittied, bimboish looking women in diapers?
Question on creating AI diaper girls: If you provide a fully rendered image of an AI generated diaper girl, what series of keywords would be most likely to produce the same girl in other poses? Is that something that is possible?
>>26388 AI typically won't create the same-looking person unless you specifically trained it to look like a specific person. You can try keeping the same seed between generations, or train/use a model that looks like a specific person.
>>26389 Appreciate the insight. Someone over on the Diapered Lolis III thread produced some lovelies, and one of them really caught my fancy. I'd love to find a way to produce a whole series of her.
>>26388 If you use ControlNet and/or Kohya or After Detailer, it is possible to take the same seed and prompt but change the pose (ControlNet), direct certain LoRAs towards a specific part of the image or one of several characters in the output (Kohya), or just replace the face after generation (After Detailer). Using dreambooth, and potentially other ways of training, it is also possible to take a single image of a face or character, train that person as a certain tag, make a LoRA of that, and apply it when using the above methods. All this assumes you are running automatic1111 locally; maybe some of it can be done in the cloud, I wouldn't know. The single-image training of LoRAs is called 'imagic'. There is also inpainting and img2img in general to consider for this idea, but the simple approach is: generate using tags for a character who looks similar to what you want and exists in danbooru images, then use After Detailer to automatically mask out the face and replace it with exactly the face you want.
https://civitai.com/models/88065/omorashi-omutsu-yuzu Ok fellow coomers, I spent months of trial and error trying to merge up a perfect starting point to train this model from. It's in part AnyLoRA, but has a lot of the other pee and diaper models and LoRAs merged into a frankenstein that didn't fuck up faces or anything too badly, but still let LoRAs boss it around and knew what the hell peed-in pants and sheets actually look like, not to mention messy diapers. This is an all-purpose wetting, messing, bedwetting, diapers-or-not, all-in-one anime checkpoint. I trained it on no specific artists, so it's not going to be as fixated on a certain style as the one good messing LoRA that was recently removed from civitai.
>>26441 Hi, that looks really nice. Do you know where I can get diaperMessV3? Thank you
>>26445 Thanks
>>26441 Wow, it's actually pretty good. I'm a little sad it seems really anime-only; I would have loved a model with this quality for real-life pics.
(429.51 KB 440x656 00358-3563270163.png)

(381.22 KB 440x656 00369-2484790961.png)

(419.68 KB 440x656 00359-3563270164.png)

(1.78 MB 1024x1536 00373-2053690630.png)

(379.54 KB 440x656 00353-2251811082.png)

>>26455 That's phase two, anon. I needed to do this first because danbooru tags are just better for this content and generally easier to work with all around. I already have mostly-prepped datasets for training the realistic one, and I already did a merge of this with some realistic models to get a basis for that training. I just have to retag the photos in the new style I found through trial and error to be better, and then spend a few hours in dreambooth training. If you want a poor man's equivalent close to what I am working from, perhaps merge my model with Henreality or some other very overpowering realism checkpoint at like 50-50, or favoring this model a bit. I did something with a lot more steps than that, but I won't upload it till I train it. I deliberately tried to make it stop doing anime while still responding to these tokens, as opposed to making it retain every bit of relevant style and structure except as real as possible. Since I am training it anyway, it's more important to get it highly realistic and just finetune it with actual pants wetting, diapers, etc. So basically what I have at the moment is a little under-responsive. Here's a sample of where it's at (see attached pics): it kinda forgot how to do proper wet spots to some extent, and is making diapers look like maybe a cloth diaper or just partial clothes. The last one is what happens when you give that checkpoint an anime LoRA (Haruhi Suzumiya): the realism overpowers it and makes them look like that Alita live-action movie or something. Neat side effect I discovered by accident. Notice also that the smaller samples are not highres-fixed; highres fix is the magic that makes this work anywhere near this well, but it's not like you want to be fapping to images for ants anyway. The important thing is that it doesn't need so much HR fix that it destroys the stains and wet spots or anything. The realism version might be tricky to get right, but I believe it will not take me that long to have at least a version of it to post.
>>26456 Godspeed anon, thank you for all your work. Eager to see this realistic model.
>>25783 I mean, it's pretty trivial to edit those images in Photoshop or similar.
>>26387 What model did you use to make these?
just been fucking around in seaart, anyone have tips?
>>26500 What kind of prompts are you using? I've been messing with it myself after I saw that post and I can't seem to get it to do what I want.
>>26532 this is really cute! love how simple it is. great job.
>>26533 You have to set the model; DiaperAI is the easiest way to get diaper results. But instead of "diaper", stuff like "(diaper:x)", where x is like 1.2 or just higher than 1, adds emphasis. >>26532 These are quite nice, teach us sensei.
Calling out the AI masters here; I need advice on curating a dataset. I'm trying to make a LoRA of my gf so I can generate infinite pics of her in diapers (she's okay with that, for the record). The results I got are not "bad", but I think they could be a lot better. I think the model overfits a lot, as I'm always getting the same pose and angle. My dataset is not really top-notch, so I want to tackle that first. Given that these are real pics, should I use the anime tag method or the photo tag method? This should be obvious, but I seem to get better results from anime tags. Also, I have a hard time selecting pics: should I go for fewer, higher-quality ones, or just dump in as much as I can?
>>26532 Try some of your prompts in the model I posted a link to; I want to know if you think it's as good as or better than DiaperAI v2, or not. I am curious whether it's just how I write prompts that gives me good results with mine. The realism version is still in the works.
>>26542 I have not yet tried to do something like this, and I find your story not entirely preposterous, so I will add that the scouring of the internet I had to do to find scraps of wisdom on training well has led me to believe that for a person or character, you have to tag only details about that character and nothing about the rest of the scene. Be fairly sparing about it too. DO NOT AUTOMATICALLY TAG THE DATASET WITH A TAGGER/INTERROGATE AT ALL TO TRAIN A PERSON. Don't even bother trying; it's a completely counterproductive thing to do. Use the checkpoint you want to apply it towards to make the LoRA, unless you are sure it's fairly similar to AnyLoRA or whatever other model. You don't have to care about others being able to use it (nor should you) if it's your gf, so you might as well make it more usable this way. It might even be preferable to tag literally nothing but the token you are training to be her, according to some advice I saw. I would link the reddit threads that were good for this, but apparently the SD sub just went private... Use booru-style tags, because from what I can tell all the diaper checkpoints use them already anyway, I think even the realistic ones. My prototype for a realistic diaper and omorashi checkpoint will be that way, and the AnyLoRA checkpoint is that way too. Don't do what older tutorials said and use a three-letter token that doesn't mean anything in any possible context, or any other idiotic way to pick a token for the LoRA. Either use no token at all and only use it to make images of one-person scenes, or use her actual name or a phrase like "blonde diaper girl" that makes sense somehow. The things you pick for this are not arbitrary, and picking something that's obviously a name or descriptor will make it work a lot better than sks or other shit like that. Also, be a bit verbose about the details in your prompts once it works at all; choose tags that would apply to how she actually looks, her actual hairstyle, color or body type being examples. If you do manually tag the images, consider that it will somewhat affect those tags too when the LoRA is enabled but the main token wasn't included. Be sure your dataset has well-lit images of her face from more than one angle. For a person, do not use prior loss preservation at all. If you wanted to train it on a concept, you'd need that set to some value between .1 and .4 probably, and you'd need class images, but for a person, do not use class images or class tokens at all; they will only get in the way. Set the learning rate slower than the default, like 5e-7, and test it with what you would actually make images with, because in my experience the samples generated by the training apps and plugins are always dogshit compared to when you actually use the checkpoint or LoRA. I am not sure how many epochs to recommend, because I have not done this specific thing so far.
>>26543 Solid advice here, thank you anon. A lot of the tips you can find about this are like 2 months old, but I feel like they're already outdated with the fast pace of the AI world. The biggest hurdle is finding a good testing workflow and modifying only one parameter at a time; while that's easy for epochs and learning rate, it's much more painful for dataset modification. I'm quite interested in the fact that you say to train the LoRA with the model you'll use it with. I tried it with DiaperAI 2.0 and got very bad results, while training it with SD 1.5 and using DiaperAI afterward led to far better results. I might try again with a better dataset, but still, my empirical results don't match the logical output I should have gotten. Got a clue here?
>>26532 Wow, nice. I just got SD + DiaperAI. What prompts or settings are you using to get results like that? Can you give an example of how detailed a good prompt is? I'm running "vanilla" plus DiaperAI at the moment. Would like to join in on the AI content :)
>>26545 I found DiaperAI hard to train a LoRA on for some reason. Maybe your way is better; tbh I didn't have a ton of luck with LoRA training in general, and had more luck making a LoRA by using SuperMerger to diff a trained checkpoint against the original I trained it from. But if you are only getting overtraining on your gf and not much burn, you could just lower the LoRA strength and see if that improves the diversity of poses. I usually train till stuff is a little overtrained, then use merges to smooth over the parts that weren't meant to be altered at all. It might sound kind of dumb, but idk, it works, though idk if that really applies to a LoRA the same way. I guess you could bake it into the model then merge that? SuperMerger does that. Overall I would say if you do retrain, use literally no tagging for the dataset and only the token for your subject when it's a specific person, and it's probably better if it's "blonde diaper girl 1" versus "Heather" in that scenario, but still be succinct here.
>>26547 Here are my tips, anon. Learn what the relevant danbooru tags are and what they mean. Read up on negative prompts, then just download a few negative embeddings, because they are usually far, far better IMHO, but still maybe add a few negatives besides one or two neg embeds. Start all prompts with quality tags, then basic details like 1girl, solo, nsfw, etc. Then write a succinct sentence as if it were a wikipedia caption, then exhaustively list relevant tags, loosely ordered by relevance and clumped with other tags about the same element of the image. An example of how I structure virtually all of mine:
Positive:
(masterpiece, top quality, best quality, highly detailed:1.2), <= the quality tags; never forget them, they make it better
extreme detailed, colorful, 1girl, solo, <= generic basic tags I usually use, or swap for relevant alternatives like 1boy
[the sentence-caption bit goes here]
diaper, peeing, have to pee, pee puddle, wet pants, leaky diaper, <= examples of phrases that may or may not be real booru tags, but they are things my model has; even with, say, DiaperAI it will recognize combinations of a word with another tag and maybe work, maybe not. This is the verbose list-of-tags part. Try to put tags for one character or another next to each other, or use ControlNet to actually assign them to different people in an image.
Negative prompt:
easynegative, bad_pictures, <= negative embeds; look on civitai, there are many options, and I like those two for diaper stuff
(text), signature, watermark, username, blurry, out of focus, artist name, <= negative tags for stuff I always want excluded
I would say Infinite Image Browser is good for reviewing outputs and lets you search old generations by tags. SuperMerger is a better merging tool, dreambooth is one option to train stuff, and depending on how much you want to do inpainting, or if you want Krita integration for that, I have some opinions on that stuff too. Wildcards Manager is good for permutations on a prompt with a list of options, especially to learn what tags do or to explore a prompt by altering it in chunks. Set your sampling method to DPM++ SDE Karras and basically use that all the time. CFG should range from about 4-12, with the usual sweet spot being 6-8. Turn it down if things look... strange and bad in a specific way called burned; turn it up if things are just not similar enough to your prompt.
Steps are loosely a measure of quality, but cranking them up will alter the composition and make generation take longer, and there's a limit to how much it helps; past about 40 it's usually not really improving at all with this sampler IMO. I usually keep it between 20 and 34, and whatever isn't good enough at 34 is probably better fixed with HIGHRES FIX. Don't use Restore Faces; use highres instead, it's faster and better at it IMO. Wait to use highres fix until you've found a seed that makes a decent image for a given prompt at the other settings, then let highres make it larger and fix problems. The initial resolution can be changed to achieve the desired aspect ratio while keeping things around 512 on one axis; I usually do 440x660 or 512x768, but this is basically arbitrary. You can set it as high as your patience for seed hunting allows once a prompt looks like it kinda works. If you change the resolution at all, it 100% alters the entire image, so never change it once you start trying multiple seeds; just use highres to make it bigger at the same aspect ratio. Highres should be set to the upscaler R-ESRGAN 4x+ Anime6B for illustrations, and R-ESRGAN General 4xV3 for anything else, or for photoreal illustrations and CGI-looking styles. Denoise controls how much it will change: if you like the image and it's not very fucked up except in a few areas like an odd eye, set it low, like 0.2 to 0.3; for a bit more fixing use 0.3 to 0.4; and if you don't care much about the details, go up to 0.5, but above that it will tend to radically alter the composition. Highres steps are sort of how much it thinks about how to fix things: at 0 it will not change anything beyond upscaling, which causes some blur; at 10-20 it will generally sharpen and add details; and at 30-40 it will maybe fix even really fucked-up faces, but also maybe start to alter compositional elements like moving a leg, at which point it might guess wrong about what to do. I usually use settings like this to batch 8 seeds of a prompt to see if it's good yet (-1 seed = random):
Steps: 22, Sampler: DPM++ SDE Karras, CFG scale: 6, Seed: -1, Size: 440x660
Then once a seed makes a good image, I activate highres like this:
Denoising strength: 0.38, Clip skip: 2, Hires upscale: 2, Hires steps: 24, Hires upscaler: R-ESRGAN 4x+ Anime6B
I posted my model up here >>26441, so if you do try it, this is what I do to get good results from it like the sample pics. In general, you can see what prompt style seems to work for a model by looking at the examples on its civitai page. It's a shame uploading pics here does not preserve that metadata AFAIK.
>>26545 Actually, I had another thought about this. Try setting your CFG lower with the LoRA still set to whatever strength was looking a bit overfit. LoRAs tend to burn or look overfit more easily at higher CFG, as well as when they are set too strong, so maybe try that and see if what you already made is less locked into exact poses and stuff. Otherwise, yeah, try literally only tagging the images with the token you are training and nothing else, and see if it's better that way.
>>26548 Thanks, getting good results already. Is there a way to add "props"? I tried "holding a teddybear", "having a pillow next to her" and such. It never worked. Is that just too much, or am I doing it wrong?
>>26560 If you want that level of fine control, you have to use ControlNet, or maybe block in the composition with a sketch and use img2img/inpainting. ControlNet is really complicated but powerful. img2img sketch is unwieldy if you don't do art, and if you do, the built-in canvas in automatic is garbage, so the better way is the Krita plugin or the minipaint plugin. But vanilla-wise, your best bet is to just use any applicable booru tags for things like holding objects, as well as for said objects, put them close together in the prompt, and hope it maybe does it in some seeds. Higher CFG can help with this, but it quickly burns the image with distorted, bad-looking stuff unless you use plugins to offset that (Dynamic Thresholding does this, but I have yet to explore it).
I don't know for certain if this could help you, but it's worth mentioning: I used to use stock photos of adults wearing nappies, and on many occasions they helped me a lot and gave excellent results. To avoid watermarks, I used external pages to download the image in low resolution and then used Tencent ARC to restore the size. Unfortunately, not all stock pages can be worked around to remove watermarks; for example, on this site I tried to use AI to remove the watermarks, but it still doesn't give good results. And why this site, you ask? Simple: this stock page has images that resemble ABDL closely enough, and I have not found these same photos on other sites. I'll leave it here in case it interests anyone: https://es.123rf.com/photo_21986198_bebés-grandes-dos-jóvenes-niños-en-ropa-de-bebé-y-pañales-sosteniendo-osos-de-peluche-y.html
>>26493 My own. I created a lora specifically to generate onesies.
>>26560 What models/lora are you using? Adding stuffed animals should be no problem for a decent model.
>>26579 You did a great job, they look fantastic
how long until we'll have something where you just go to a website and type something in and it just works?
>>26584 The website will probably be paywalled and censored in some ways to comply with various laws. Nothing beats running it locally.
>>26579 Would you be open to sharing the lora, please?
>>26611 Nope, not at the state it is in. It needs a few hundred more diverse photos to get it not looking a ton like the training images. If someone compiled a better image set I'd be down to make another though.
>>26548 Thanks for all those details. I am using your model and suggestions and already have great results. The only flaw is that hands, arms, legs and feet are often very strange: they are either missing, more than two, or strangely "overlapping" when two characters are involved. Is this a model issue?
(108.93 KB 584x778 test for AI.jpg)

(348.84 KB 512x512 diaper 20Kv1.png)

(355.27 KB 456x600 best result.png)

Hello all, this is my first time posting here; I found my way here through a discord hoping to get help with AI-generated images. I hope this is the right place to ask. So I was messing around with a website that does image-to-image AI. I tried it out, but the website doesn't really generate diapers all that well. This is what I mean: https://twitter.com/OmutsuNami/status/1669771068565868546 So far, I have followed some of the responses and managed to get Stable Diffusion running on my PC, but it's seriously difficult and I am nowhere near generating the same quality as some of you here. Specifically, what I am trying to do is img2img generation where I am basically posing, and I want an anime version of myself with a similar pose. I will share the results for the different AI models: DiaperAI (2nd image) feels inconsistent but often does what it's told; no anime version, unfortunately. Diaper20K (3rd image) is more consistent, but it usually cuts faces off out of the frame; for some reason, adding "animal ears" to the prompt generates faces a little more frequently. I also tried Waifu Diffusion 1.4, but that does not make any good results whatsoever. I need help and advice with:
1. Finding an AI model that can generate an anime character based on my poses a little more accurately.
2. How to make different "art styles" with Diaper20000.
3. How to get my IRL reference photos converted into something like this >>26292 (that image is impressive and super cute btw, but I digress).
I am very thankful for any help I can get with this.
>>26823 Try the DiaperAI anime model; you can select it on civitai where the various versions are listed. You can also download a LoRA and use it with an anime model like AnythingV3. Just search for the tag 'diaper' on civitai and download LoRAs. They work out of the box with Stable Diffusion, and there are youtube tutorials on where to click.
>>26823 If you want anime-style images, use my Omorashi Omutsu Yuzu model posted above; it's very good at anime diapers, and I feel it's better than all the other anime models on civitai for both diapers and pants/bed-wetting content, and it can do messy too. I am about to update it with an even better version 2. I will be posting a realism version called Pomello soon, but it might not be so good that I can confidently proclaim it "better than DiaperAI v2" at diapers. It might be better at messy diapers or leaking diapers, or better than literally any realism model at wet pants (because so far civitai lacks any at all, really), but DiaperAI is, I have to admit, very good at dry diapers that look photoreal, with a variety of styles and poses. The ABDL model on civitai is also good for realistic diapers. I posted the LoRA that does messy well in a mega link above. There are LoRAs for photoreal diapers like whitediaper, and LoRAs for pants wetting and peeing like PeeingThemselvesV2 (good for anime or realism), peeing woman (good for realism but not pee stains), and something called pee stain panties or similar, a mostly anime-focused LoRA for pee stains that can work for realism as well. But the only dedicated anime diaper model with much flexibility and quality other than Yuzu on civitai is Amina, which you can only find by searching it by name for some reason. IMO mine's better by a wide margin and can be modified with LoRAs more easily. Besides all of that, read my post here for good settings and other plugins to use: >>26548 Good luck, anon.
P.S. to that last post: if you want to do img2img of yourself, you can maybe use any anime checkpoint and keep the denoise below 0.5, and the prompt should be written similarly to what my linked post says, preferably a prompt that generates a diaper image in the first place. Hunt for seeds that make the image anime-style as needed, but the lower the denoise, the more it will just keep the image as-is. So: too low and it won't turn anime; too high and it will turn the diapers into panties or generate some other image entirely. Yuzu is probably good for this img2img task from what others have told me, but I have yet to try it much TBH.
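Same idea in code, for anyone scripting it: diffusers exposes that denoise knob as "strength" on its img2img pipeline (the model and file names are placeholders):

# img2img: "strength" is the webui's denoising strength. Low keeps the source
# photo (pose intact, little style change); high repaints it (anime, but drifts).
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

src = Image.open("pose_photo.png").convert("RGB").resize((512, 768))
out = pipe("1girl, anime, diaper", image=src,
           strength=0.45, guidance_scale=7.0).images[0]
out.save("anime_version.png")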
>>26825 >>26841 >>26842 Thank you anons, I will try your suggestions except the LoRA one. I googled what it was, and teaching the AI seems extremely time-consuming; unfortunately I live a busy life and can't sink that much time into this, even though I do want to.
>>26853 Both of us are talking about using already-trained LoRAs, not training a new one. If you saw stuff about how to train one, that's several levels deeper and, yes, much more involved than simply using one, same as with the models themselves. You want to *use* them: there are tons already trained for all kinds of stuff, and they are tens or hundreds of MB, not multiple gigabytes like a checkpoint. You also can't use two checkpoints at once, but you can use multiple LoRAs, so they are very useful for making sure a checkpoint is nailing a concept when you start tweaking a prompt or seed-searching further. Training is not necessary to have a LoRA for a character or concept if someone else already did it and shared it, anon.
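To make concrete how little effort "using" a LoRA is, here's a hedged diffusers sketch (all file names are made up stand-ins for civitai downloads; this assumes a recent diffusers with the PEFT backend). Note how two LoRAs stack on one checkpoint with independent weights, which a second checkpoint could never do:

```python
# Using pre-trained LoRAs -- no training involved.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "models/anythingV3.safetensors", torch_dtype=torch.float16
).to("cuda")

# Unlike checkpoints, several LoRAs can be active at once,
# each with its own strength.
pipe.load_lora_weights("loras/diaper.safetensors", adapter_name="diaper")
pipe.load_lora_weights("loras/pacifier.safetensors", adapter_name="paci")
pipe.set_adapters(["diaper", "paci"], adapter_weights=[0.8, 0.6])

image = pipe("1girl, diaper, pacifier in mouth",
             num_inference_steps=30).images[0]
image.save("stacked_loras.png")
```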
>>26841 I am using the DiaperAI 2 model off of Civitai. I am trying to get some crawling pictures made, but the model always generates them crawling facing away from the camera. Any tips on how to get them to turn around so we are looking at the face/head instead of the butt?
>>26549 gf LoRA anon here. I tried your various tips, and while I'm nowhere near finished, I just wanna share that I'm seeing an improvement. A single, named tag definitely seems to have an effect. I also tried playing with the learning rate: 5e-7 was definitely too low, as it clearly underfitted, but I found a somewhat sweet spot around 1e-4 to 2e-4. I'm still refining it atm; in terms of epochs I'm working with around 20, and I generally find the 15th to 18th epochs to be the best. More than 20 will probably overcook the LoRA; less than 10 is clearly insufficient and underfits. If this could be of interest to anyone willing to train a person LoRA, my two cents.
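If anyone wants to do that epoch-picking less by gut feeling: a sketch of probing each saved epoch with a fixed seed and prompt so the comparison is fair. The file naming assumes kohya-style epoch suffixes, and every path and the prompt here are placeholders:

```python
# Render the same seed/prompt with each saved LoRA epoch and eyeball
# which ones under/overfit. Assumes diffusers with the PEFT backend.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "models/anythingV3.safetensors", torch_dtype=torch.float16
).to("cuda")

for epoch in range(10, 21):  # <10 underfits, >20 overcooks (per the post above)
    name = f"e{epoch}"
    pipe.load_lora_weights(f"output/gf_lora-{epoch:06d}.safetensors",
                           adapter_name=name)
    pipe.set_adapters([name])
    gen = torch.Generator("cuda").manual_seed(1234)  # fixed seed = fair test
    img = pipe("photo of a woman, upper body", generator=gen,
               num_inference_steps=30).images[0]
    img.save(f"probe_epoch_{epoch}.png")
    pipe.delete_adapters([name])  # unload before testing the next epoch
```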
Hi, I am sheo, and I am leaving the public AI art community. Big thanks to all the buddies who contributed to this community. Especially: lifania, you did an awesome job, thank you for DiaperAI. AiFog, thank you for your Pacifier LoRA. And a few other anons who trained their models and released them here.
= TL;DR ===
I don't want to take all my tips, tricks and workflows to the coffin. If you're interested in this, or you are a beginner but know the basics of generating images with stable diffusion, you can read this "mini guide" and maybe you'll start doing more awesome works!
===
TBH I wanted to make a full beginner-friendly guide (video, voiceover and screen capture) back in the days of the colab, before leaving for good. Just about when I was planning to make the guide, Google decided to shut down all stable diffusion notebooks for colab free-tier users. But I changed my mind about making a guide; even if I had the opportunity, I definitely would not have made it under my main pseudonym. It would oversaturate the abdl art niche with bad work, and traditional artists would start to hate me. I don't want that, I want to be a normal ABDL without any questionable claims or shit like this. This is one of the reasons why I'm quitting.
=
*clears throat* uhm. I used the DiaperAI model (anime version). It's free and you can find it on civitai.
My newest workflow for ARTS (the one I used for the new works published on my DA and Pixiv): I used the Anything-4.5 model to do characters, inpainted hands with the Grapefruit 2 model (yep, it can do better hands and anatomy, but I didn't like the style) and inpainted diapers with the DiaperAIAnime model. In most cases my workflow goes like:
1. Anything45
2. Grapefruit (fix hands)
3. Photoshop (preprocessing, like adding PNGs of real-life objects)
4. Anything45 again (inpainting the PNGs with noise 0.4 or so, to "convert" the real-life objects into drawn ones)
5. DiaperAI (inpainting diapers)
6. Anything45 upscaling with img2img upscale (the Ultimate Upscale script or something like that, I don't remember), prescaler is RealESRGAN+Anime6B, mostly high noise values like 0.2-0.4; it can give more artifacts, but you can get many more details.
7. Inpainting out artifacts that the upscaler can give
8. Postprocessing in Photoshop, hand-fixing in some places.
=
For my most recent works (the REALISTIC ones) that I published on twitter only (they were deleted from public minutes after I published the message about the end of my AI arts on DA and Pixiv) I used this workflow:
1. Lyriel 1.5 (still haven't tried 1.6) + good prompt + hires fix, 2x res, 0.52 noise.
1.1. Cropping 512x512 chunk(s) from a 1024x1536 (or thereabouts) picture to inpaint it without OutOfMemory errors.
2. Lyriel 1.5, inpainting (lots). Fixing hands etc.
3. Do a 1.1 step
3.1. Photoshop preprocessing. Somewhere, somehow I got a pack of 300+ PNG diapers that I can put on characters as a sketch, to guide the model on how to properly inpaint.
3.2. DiaperAI 2, diaper inpainting, fixing issues with the inpainted diaper etc. Prompt white diapers if you want to add a pattern in post.
4. Upscaling with img2img (not using any scripts). Set the dimensions to your_pic_dimensions * 2 or something like that, set the mode to "Fit (Latent upscale)" (I don't remember the actual name of that radio button), set noise to something like 0.3-0.45 (it can give some artifacts, but it will give you A LOT OF DETAILS, especially on fur or hair).
5. Do a 1.1 step on artifacts (if any)
5.1. Inpaint artifacts.
6. Photoshop postprocessing (like adding real pattern prints on diapers. Use multiply mode and warp them using the warp transform)
If you know what you're doing you can get awesome results. I have even tricked some of my friends with this.
====
Tips and tricks:
= 1 =
If you inpainted something and it's bad, "roll" again. If you "rolled" more than 50 times and it's still bad, change the prompt. If you changed the prompt enough and the results are still bad, change the model. If you can't find a good enough model, train one for yourself. It can even be considered good life advice in general, lol.
= 2 =
You can do "realistic" soggy diapers in Photoshop or other image manipulation software, because often you can't prompt a soggy diaper in DiaperAI and get a good result. Yellow or orange colour, a coal or hard pencil brush from a random free pack (lol, I have soggy and dirty .psd assets that I made after a few arts with sogginess. It can save you a lot of time if, like me, you want to publish garbage works a few times a day. Nobody will ever notice that the pattern is the same on every one of my works :D), masks to give it a fade, multiply mode, warp transform. It will be easier if you have previously seen a lot of high quality pics of soggy diapers :)
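For anyone who wants to replicate the "inpaint the diaper with a dedicated model" step outside of the webui, here's a rough diffusers sketch (the model and file paths are placeholders; white pixels in the mask mark what gets redrawn, black is kept as-is):

```python
# Diaper inpainting sketch. Assumptions: a recent diffusers and an
# SD 1.5 inpainting-style checkpoint at a hypothetical local path,
# plus a hand-painted mask image.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_single_file(
    "models/diaper-inpainting.safetensors",  # placeholder path
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("character.png").convert("RGB").resize((512, 512))
mask = Image.open("diaper_mask.png").convert("RGB").resize((512, 512))

out = pipe(
    prompt="white diaper, thick diaper",  # keep it white, pattern it in post
    image=image,
    mask_image=mask,
    strength=0.75,          # how aggressively to redraw inside the mask
    num_inference_steps=30,
).images[0]
out.save("character_diapered.png")
```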
(1.60 MB 1944x2592 thankyou.jpg)

(371.56 KB 512x512 00012-806177479.png)

Big thanks to everybody helping me here. I am finding improvements, but it is still not quite at this level yet >>26292. With your suggestions, though, I have started to generate more and more satisfactory results. I changed my prompt a little and ended up with this result. It is still not getting the pose quite right, and lowering the denoise makes the colour leak and odd limbs start to appear. From this post >>26548 there is a lot I still don't understand, like the highres thingy. I found something called SD upscale and the option for R-ESRGAN 4x+ Anime6B, but when I selected that it generated a collage of 108 separate images and took 2h. I can't find the options for Highres or Clip skip, Hires upscale, highres steps, etc... I am understanding more the more I read and re-read, plus trial and error. Everybody's advice has been stellar; I am happy I found this place and I will stick around for a while. Thank you again <3
>>26880 Oops, duplication. I'm sorry.
(1.14 MB 1280x1280 00086.png)

(500.16 KB 512x768 00038-1196698123.png)

>>26883 Try using ControlNet and messing with the canny settings; the metadata should be saved into my image, so you can dump it into PNG Info and look at it.
(1.79 MB 1024x1536 01302-2100475239.png)

(2.04 MB 1024x1536 01228-2461929362.png)

(1.83 MB 1024x1536 01320-344529102.png)

(1.65 MB 1024x1536 01304-1662925961.png)

(1.61 MB 1024x1536 01304-3562677752.png)

>>26880 Interesting, in-depth post, to be sure. I will have to try out what this person has shared too. I might also post some workflow stuff for really nice final outputs once I work out a good workflow. I have been training/merging models more than generating final images for a while, but now I think my models are as good as they can get without another hardcore pass of training, so I will be trying to get workflows solved too. Just perusing this, though, it sounds legit, because I know for a fact Grapefruit was something I merged into what I started training from, to get reliably good hands in the Yuzu model before it was trained on diapers and omorashi. My thoughts on this workflow, prior to really trying it out much: the Krita plugin and mini paint / hakuimg might make inpainting better, and ControlNet and ADetailer might make it less necessary too. As these images show, OmoOmuPomello isn't perfectly photoreal; I expect to use img2img to get some better photographic-looking end results.
>>26883 Maybe highres fix isn't a thing in your generator. I am using automatic1111, and hires fix is a checkbox that will show the other high-res settings. If you found "SD upscale and the option for R-ESRGAN 4x+ Anime6B", that might have been in img2img mode; I am talking about using it in txt2img mode. They are not the same in that regard, and you might have been generating tiled images or a batch? idk
>>26861 I second the guy saying ControlNet. If you want to control poses well, you are either going to need ControlNet or some sort of img2img approach, but you can at least try tags like "facing viewer, on all 4s, looking at the viewer, face, front view" etc.
The Yuzu model is about to get v2 posted; here is the first version of my Pomello model, the realism version. Also attached are some sample images. https://civitai.com/models/95360?modelVersionId=101785
Here's a full listing of the tags used in training that might work; I omitted the messy ones mostly from civitai to keep it from getting deleted, because it's not even amazing at pants pooping anyway: having an accident, implied accident, have to pee, potty dance, diaper check, diaper changing, diaper bedwetting, sleeping, wet furniture, pee, peeing self, peeing, pee stream, pee running down legs, pooling pee, peegasm, diaper, dry diaper, wet diaper, messed diaper, diaper under clothes, diaper under pantyhose, diaper waistband peek, diaper peek, diaper pull, pacifier, holding pacifier, pacifier in mouth, pacifier clip, paci, onesie, onesie butt flap, onesie crotch buttons, pov diaper change, wetness indicator, faded wetness indicator, pullups, tape ons, holding diaper, open diaper, medical diaper, ABDL diaper, goodnites, baby diaper, cloth diaper, diaper cover, training pants, soggy diaper, leaking diaper, thick diaper, hypermessing, faded pattern, patterned diaper, cupcake patterned diaper, bunnyhops patterned diaper, abena m4 patterned diaper, floral patterned diaper, strawberry patterned diaper, space patterned diaper, dino patterned diaper, bear patterned diaper, lion patterned diaper, skull patterned diaper, white diaper, pink diaper, lavender diaper, purple diaper, blue diaper, cyan diaper, black diaper, baby wipes, wet pants, wet shorts, wet skirt, from behind, bed sheet, bedroom, pout, blush, scared, trembling, embarrassed, sleepwear, public urination, pooping self, pooping, butt bulge, have to poop, shit stain, shit puddle, visible shit, poop peek, shit running down legs
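Since ControlNet keeps coming up for pose control, here's a hedged diffusers sketch of the canny flavor. The base-model id is the stock SD 1.5 repo from the diffusers docs; in practice you'd point it at whichever diaper checkpoint you use, and the file names are placeholders:

```python
# Pose-locking with ControlNet (canny). The edge map pins composition
# and pose while the prompt/model decide everything else.
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # swap in your own checkpoint here
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Build the canny edge map that locks the pose.
src = np.array(Image.open("pose_reference.png").convert("RGB"))
edges = cv2.Canny(src, 100, 200)  # tune thresholds per image
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

img = pipe(
    "on all 4s, facing viewer, looking at the viewer, diaper",
    image=control,
    num_inference_steps=30,
).images[0]
img.save("controlled_pose.png")
```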
>>26919 Already down. Sadge.
>>26922 Yeah... false-flagged as underage, but I appealed and we will see. Yuzu is updated too, but I will post both in a mega link quick... might have to use hugging face for this type of content.
>>26923 hugging face mirror: https://huggingface.co/ZackRobotHeart/OmorashiOmutsuPomello/tree/main mega with both models once the yuzu2 finishes uploading, plus the messy lora https://mega.nz/fm/tVlk2CbC
>>26927 I think your mega link is just a link to your own folder, not a sharable link (try it in a private window)
>>26919
>maybe highres fix isn't a thing in your generator, I am using automatic1111 and hires fix is a checkbox that will show the other high res settings
I am using the same generator. The thing is that I have only worked with the img2img option; I haven't even touched txt2img yet. I just downloaded the ControlNet safetensors and 2 of its models. I will see how it goes!
>>26928 my b u right I don't usually use mega.io https://mega.nz/folder/MZtyBArL#ioJQUa0v7hC6-ar668Gx_g
>>26934 Being able to get good results out of txt2img will help you get better results out of img2img, because the prompt still matters. The reason you have to work in a fairly constrained range of denoise settings, based on your prior posts, is that you need it to "do like this pic" but also change enough to "not *literally* this pic tho". When you are working with an image that was generated by SD in the first place, the same prompt and/or seed and model will inherently produce similar images in img2img even at very high denoise; for that matter, a prompt that reliably makes an image in one model can be used to remake that image as realistic or anime in a different model, by running img2img on something that was made in txt2img. If you mostly just want to do inpainting to edit diapers onto stuff, though, I recommend inpaint masking mode or the minipainter plugin for that kind of task.
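One easy way to find that "like this pic, but not literally this pic" window is to sweep the denoise with everything else pinned. A toy sketch (paths and prompt are placeholders):

```python
# Sweep img2img strength with a fixed seed so the only variable is the
# denoise. Low values keep the photo; high values redraw it entirely.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "models/anime-checkpoint.safetensors", torch_dtype=torch.float16
).to("cuda")
init = Image.open("my_pose_photo.png").convert("RGB").resize((512, 512))

for strength in (0.3, 0.4, 0.5, 0.6, 0.7):
    gen = torch.Generator("cuda").manual_seed(42)  # same seed every run
    img = pipe("1girl, anime, diaper", image=init, strength=strength,
               generator=gen, num_inference_steps=30).images[0]
    img.save(f"denoise_{strength:.1f}.png")
```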
(2.91 MB 1488x1664 00010-3869152501.png)

(2.85 MB 1488x1664 00012-1927616623.png)

(2.78 MB 1488x1664 00015-3869152501.png)

ControlNet is very powerful
>>26935 Based. Can't wait to test it.
(177.10 KB 1024x2096 Aifog-Wip.jpg)

>>26941 That looks good; try reference+attention. Here's a work in progress of mine.
>>26880 you're welcome, thanks for the tips
Pomello model back up on civitai
SDXL, the new Stable Diffusion model that Stability AI is working on, seems pretty good. Can't wait for people to start fine-tuning it on abdl content as soon as it gets released. If you are interested, you can test it on the Stable Foundation discord channel, although NSFW content is blocked.
This looks great, but I have tried the DiaperAI model, and whilst it gives great results, they are all in anime style, which doesn't work for me. Anyone got a link to a model based ONLY on real images / photorealistic ones?
>>27144 DiaperAI v2 works pretty well for photorealistic images. Don't hesitate to reinforce your prompt with positive words like "photo" or "realistic", and put "cartoon, manga, anime" in the negative prompt.
>>27144 My guy, it has 7 versions, only 2 of those are not photographic.
>>27144 DiaperAI v2 and most other models such as abdl, pomello, etc. are trained on photos, but not exclusively, and they usually started from illustrations because the tagging is so much better, so it's easier to get them going. You just need to learn how to prompt them to NOT do anime, or apply some LoRAs to make them more real, or run their generated images through another model in img2img to make them more photographic. This shit isn't as magically zero-effort as the haters make it out to be; you have to learn what actually works, because no model is 100% never illustrative.
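In code terms, the "prompt it away from anime" advice is just the negative prompt doing its job; a tiny hedged sketch (the checkpoint path is a placeholder):

```python
# Steering a mixed-training model toward photorealism via prompts.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "models/diaperAI-v2.safetensors",  # placeholder path
    torch_dtype=torch.float16,
).to("cuda")

img = pipe(
    prompt="photo, photorealistic, woman wearing a thick white diaper",
    negative_prompt="anime, cartoon, manga, illustration, drawing",
    guidance_scale=7.5,
    num_inference_steps=30,
).images[0]
img.save("photoreal.png")
```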
I just want to say thank you guys for posting your experiences, tips, and guides. I thought I'd share some of the successes I've gotten over the past week or so. I'm running a fairly underpowered GPU, so this is probably the limit of what I can make quality-wise, and I don't have access to ControlNet.
Now I pray for a good pants-messing LoRA/model
>>27171 >>26941 So awesome, I'm amazed by the quality. I have been using different models and downloaded an abdl LoRA, but I haven't trained one myself yet. I JUST figured out how to use ControlNet, and of all the models, the abdl model from civitai works best right now on stable diffusion. DiaperAI and OMOYuzu v10 and v20 are doing well but don't get the poses quite right yet. I still need to tinker with the settings to get results like these. I really like the details, the eyes and the light on all of these. >>27171 >>26941 >>26532
>>27288 I'm pretty much relying on AnythingV3, which gives it the nice anime effects. The ABDL LoRA works wonders for the diapers themselves. Generally I've had luck using img2img to control the posing, feeding it either a drawing I like or one of my own.
I started to use AI just to enhance my ugly mug now. I used GIMP first, but man, AI does wonders, especially with makeup, which I struggle with so badly. Not giving up on the anime versions of my poses just yet, but I am going to put that to the side for now.
>SDXL 0.9 download: https://pastebin.com/b8kXrkAF
I've prompted a few things here and there, some better than others, most better than the actual abominations of twisted limbs generated while trying to replicate past success (note: the 5 same-ish ones were testing out different sampling methods).
(2.61 MB 1424x1424 00029-3626138248.png)

(2.55 MB 1568x1576 00031-2766410187.png)

(2.78 MB 1568x1576 00036-2702743234.png)

(2.03 MB 1568x1576 00043-2072738360-ad-before.png)

(2.53 MB 1352x1728 00105-308333993-ad-before.png)

some more totally didn't mess up uploading them noo never
(2.13 MB 1352x1728 00109-3603242253.png)

(2.75 MB 1568x1576 00137-2289377157.png)

(2.67 MB 1352x1728 00107-2477880233-ad-before.png)

the final 3
How was the model yielding those last amazing results trained? Was it SDXL? Checkpoint/LoRA? How do you get this anime-ish photorealistic look, since there probably aren't many images like that to train on? And roughly what's in the dataset would be nice to know... The more info is shared, the faster a perfect ABDL model can be created!
>>27557 Can you get the AI to go even larger on the diapers?
>>27561 It was not trained, just mixes of LoRAs and some select checkpoints. I forget exactly which prompts I used at the moment, but when I get the time I'll go through them and get the gen details. I will say ControlNet played a big part. >>27578 With some tinkering, probably, but I usually encounter errors. The middle pic (0036) of no. 27556 is the biggest I've gotten it to go that's not completely wonky.
(1.17 MB 1024x1280 00015-578383550.png)

(1.20 MB 1024x1280 00058-3363481342.png)

(1.41 MB 1024x1280 00042-75188950.png)

(1.23 MB 1024x1280 00062-2884211280.png)

SilksketchV1 with Reverse Smoothstep 4 blockmerge on ManmarumixV2 gives some fine looking peeps
>>27307 Obvious case of overfitting here; pic related is the original.
>>27693 Literally none of the elements of that illustration are present except a yellow background. If it were overfit, the pixels would be crushed as it tried to look exactly like the image, and the actual elements and look of the image would be trying to force themselves to be visible. The model would also keep trying to replicate that image over and over again with different prompts and seeds. It's just a lying-down pose and a yellow background; that's not overfitting at all. Congratulations on seeing two images that have similarities.
(3.00 MB 1600x1600 00052-1321108073.png)

(2.77 MB 1600x1600 00057-2265776782.png)

(2.64 MB 1600x1600 00053-1831280231.png)

(2.95 MB 1600x1600 00059-811699197.png)

(3.31 MB 1600x1600 00055-2484175902.png)

A small collection made via merge block weighting: calicomixreal_v20.safetensors first, OmoOmuPomelloV1fp16Pruned second, under reverse smoothstep.
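For anyone wondering what "merge block weighting under a smoothstep" actually does: roughly, it blends two checkpoints' weights with a per-UNet-block mixing curve instead of one global ratio. A simplified, hedged sketch; the block-index parsing and the curve placement are approximations of what the webui extension does, and the file names are the ones mentioned above:

```python
# Per-block weighted merge of two SD checkpoints. "Reverse smoothstep"
# here means model B's influence falls off smoothly across the UNet
# blocks instead of being one constant ratio.
import re
from safetensors.torch import load_file, save_file

def smoothstep(t: float) -> float:
    return t * t * (3.0 - 2.0 * t)

a = load_file("calicomixreal_v20.safetensors")
b = load_file("OmoOmuPomelloV1fp16Pruned.safetensors")

merged = {}
for key, wa in a.items():
    wb = b.get(key)
    if wb is None or wa.shape != wb.shape or not wa.is_floating_point():
        merged[key] = wa  # keep model A's weight where they don't line up
        continue
    m = re.search(r"(?:input|output)_blocks\.(\d+)", key)
    t = int(m.group(1)) / 11.0 if m else 0.5  # map block index to 0..1
    alpha = 1.0 - smoothstep(t)               # the "reversed" curve
    merged[key] = ((1.0 - alpha) * wa.float() + alpha * wb.float()).half()

save_file(merged, "merged_checkpoint.safetensors")
```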
>>27730 Looking pretty good. Now try for messier, and maybe yogurt dribbles
>>27730 Do you think we'll ever be able to get rid of the plasticine look that AI images have? Maybe you could run the created image through a filter or something.
>>27745 Take a look at the skin on the back of your hand, like really closely; take out a magnifying glass if you've got one. Then go to a mirror and look at your face, really focusing on its features. After this, take a mirror selfie at about your own height away, then look at that picture. Consider what you saw when you looked at the back of your hand: creases, lines, pores, wear and tear, stretch. The tone is slightly different, the texture varies slightly. Maybe you've got a few small scars or blemishes from something. Then think about what your face had: you could see the pores of your skin, the oils that naturally exist, the creases and lines from your expressions and how your skull carries your face. Then just look at that picture. Those details aren't really visible in there, yet their total compound effect shapes the way you look in that picture.

Right, how is what I just said relevant? The way these AI systems work, as in all of the ones we have right now, is by statistical analysis. Whatever answer you get or image you generate is an average calculated from the model. It is a mathematical solution, and in most cases, unless random components are added in (and in some cases even regardless), it is deterministic: the same inputs will always get you the same output. What the current image generation technology is trying to do is basically solve the mathematical problem: (Sum of tokens from the prompt you gave it) - (Sum of tokens from interrogating the latent space) ≈ 0.

So you wonder, "how the fuck do these things connect?" In that selfie, what made you look like you do, and what made the image feel "realistic" or "normal", are the imperfections. These models can't have "imperfections" in them, because they are an average representation of the data they were trained on. The skin texture you see is an average of all the skin textures it has "seen" in training. This is why most models only have like 10 different "people" per description: 10 different kinds of men, 10 different kinds of boys, 10 different kinds of African girls. You start to see the same faces and bodies very quickly as you use a model for a long period. These ~10 variations exist not because of imperfections but because of the model structure: something filed a slightly different version of that thing near the thing you prompted for, but it didn't get averaged into the same point as the others. So you get a cluster, and as you travel towards the edge of the cluster, at some arbitrary point you just jump to the other (say, "black cat" and "black dog"). This is because during the generation process, as it does the averaging, at some point a rounding of the finite precision it works at leads it towards the other cluster. With some fiddling you might be able to land partially between the clusters and get something that is "black dog black cat", as in, when you take its value, it is equally black cat and black dog. Generally rounding prevents this, but depending on the model the vague zones can be big or small.

If you train or finetune a model with perfectly clean pictures without any faults or noise, you get that very 3D, sterile, uncanny-valley effect that 3D artists have spent about 30 years trying to get rid of. This is because when you do 3D rendering with raytracing, you get perfect mathematical light; the problem is that reality is not perfect. Current image generation systems fundamentally struggle with things that are just *slightly* something.
If there is a sharp edge, it will try to define it sharper. If there is something blurry, it will blur it out more into a sort of gradient. If there is something noisy, it will process it until there is no noise, or alternatively it will average the noise out into a gradient of noise. This is also because of the training methods used. Many models were notorious for not being able to make dark and black things: everything was always grey or somewhat lit, until someone figured out that the problem was that during training the extremes of the RGB values got clipped. This led to the extremes averaging down, so your perfect black 0,0,0 became 10,10,10, or 10,0,0 if there happened to be red nearby, while your whites were only preserved if there was something nearby that was different enough. The AI discarded low-value information because it just ended up being rounded out in the averaging process.
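The "averaging kills texture" point is easy to demo numerically; a toy numpy sketch (the numbers are made up, it's just the statistics of the argument in miniature):

```python
# Stack many "training images" of the same flat skin tone, each with
# its own random grain. Every individual sample is bumpy; the
# pixelwise mean is nearly perfectly smooth -- the plasticine look.
import numpy as np

rng = np.random.default_rng(0)
base = np.full((64, 64), 140.0)  # flat mid skin tone

patches = [base + rng.normal(0, 12, base.shape) for _ in range(500)]

print("one sample's std:", round(patches[0].std(), 2))               # ~12, grainy
print("averaged std:    ", round(np.mean(patches, axis=0).std(), 2))  # ~0.5, plastic
```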
>>27748 qrd?
(588.15 KB 640x768 00058-2321027274.png)

(1.37 MB 1024x1280 00116-2594013479.png)

(1.38 MB 1024x1280 00120-304542587.png)

(2.05 MB 1296x1400 00036-3569962600.png)

(1.53 MB 1024x1024 00158-2196286140.png)

>>27749 AI works by averaging the data it gets, so it will more likely return the smooth average of skin rather than the bumpy deviations of skin. If you overlay hundreds of faces on top of each other, all the specific imperfections of each one go away and you're left with a perfectly smooth face. But you can always train it to give you the results you're after, like the "Humans" checkpoint civitai.com/models/98755. You could overblow it and smooth out a crunched result with hires fix, overload it with LoRAs to get a crunchier, more realistic and artifacted look but also lose even more control, etc.
(1.69 MB 1024x1280 00157-993449578.png)

(1.48 MB 1024x1280 00194-179203052.png)

(1.78 MB 1024x1280 00205-2247077846.png)

(1.51 MB 1024x1280 00133-1027290066.png)

(1.76 MB 1024x1280 00247-1214870289.png)

>>27723 Well someone sure got a bit worked up there :)
>>27759 model?
>>27760 You cause you're wrong?
>>27766 > no u lol
(8.64 MB 351x276 IMG_8021.gif)

I guess this is the thread for this? DiaperedXtreme had this on tumblr; I think he had more, but they're gone.
>>27730 That's so cute. Could you make some photos of them, at the same age, but wearing white tights and baby clothes?
(952.17 KB 1272x500 IT_ISNT_OVERFITTING_MOOOOOM.png)

>>27693 Careful anon, "over fitting" is a major trigger word in these threads because AIncels want to believe it isn't theft and plagiarism so long as their glorified image search/compositing/filtering is sufficiently obfuscated. >>27723 > NOOOOOOO it isn't over fitting unless it looks EXACTLY like the original! Cope
Don't think I've seen any AI art with locking plastic pants, diaper covers or those magical locking tapes you see sometimes. Anyone tinkering with AI know why that is? Not enough pictures to train on? Not distinct enough?
>>27843 It's probably because there aren't enough diverse, high-quality images to train from. If you can find a few hundred, I'm sure someone could train a LoRA for you.
I kinda want to make ads with some of these.
(1.34 MB 1024x1024 SDLXL-Paci.png)

Spent some time making an SDXL version of my pacifier LoRA; it needs to be redone before I consider uploading it. 46 images and 480 steps; did it in Kohya. Still playing around with SDXL.
>>28127 What hardware are you using for SDXL? Using my 3070, I am not the happiest with the speed it takes to generate an image. Nice to know that Kohya works for training, though.
Idk here’s some of mine
>>28269 I find it funny that the zipper went on the short holes and the dick got the smooth hole
(2.26 MB 1408x1024 animeArtDiffusionXLAlpha3.png)

>>28165 Yea, I have a 4090; retrained a bunch of my old stuff onto SDXL for experimentation. >>28269 What causes that almost film-grain effect? Is that the upscaling?
>>27841
>Careful anon, "over fitting" is a major trigger
Thank god I will never be this retarded
(381.48 KB 512x512 00017-212535177.png)

(298.33 KB 512x512 00004-1600403859.png)

(336.01 KB 512x512 00010-824250856.png)

(376.22 KB 512x512 00016-4262411530.png)

(319.19 KB 512x512 00003-4240354805.png)

Finally said fuck it and installed this shit. Installing it was a bitch because my Python version was 3 years out of date. Didn't turn out great, but better than I expected for just bullshitting the prompt, with the parameters set low enough to generate in around a minute on a mid-range laptop from 3 years ago with other shit running in the background.
(212.07 KB 512x512 00019-2639895448.png)

(314.58 KB 512x512 00007-2821160249.png)

(336.72 KB 512x512 00004-278313573.png)

(292.04 KB 512x512 00034-1256875607.png)

(371.13 KB 512x512 00063-2169845461.png)

>>28324 tried some more
>>28317 You are not exploiting technology you did not make... To do things for you that you do not know how to do.... Because of what a smart boy you are....
Can any of you guys generate a maid wearing a diaper while baking?
(4.41 MB 4781x3000 IMG_0934.jpeg)

Wondering if some AI-god can generate a set based on this?
Message to Azusa, assuming he's in this thread: the new model used last month was kinda a waste of time, and honestly the new model you're working with now isn't looking much better. I don't know if it's because you're especially fond of elaborate background elements or what, but none of what you've been doing is really conferring value to your audience. More complicated or dramatic shading makes the images less well composed and harder to "read"; it's the reason why bad AI is so easy to spot from across the room by the thumbnail alone. Worse, what the new models are emulating is the cheap digital rendering tricks that were mostly in fashion over 10 years ago (dodge, burn, bloom, etc. How about throwing in a lens flare while you're at it?). Really, I like the style of stuff like >>22928 more than anything: clear, simple, easy to read and expressive, more about the scenario and character expressions than shiny rendering. That's what is valuable in art. PS: It'd be nice to see a straitjacket pic where they aren't gagged/pacified, just for a change of pace.
>>26292 >>26532 REALLY loving these ones. Best diaper AI art I have ever seen. More in this style, please!
(198.98 KB 1024x1536 F2w-lGlWAAAc93q.jpg)

(3.82 MB 1536x2048 Laintests (1).png)

(1.19 MB 832x1024 Laintests (3).png)

(1.18 MB 768x1024 Laintests (2).png)

(3.73 MB 1536x2048 Lain.png)

(3.35 MB 1536x2048 Laintests (5).png)

>>28805 Even though it can't get the tapes quite right, this is awesome
https://www.pixiv.net/en/users/35359563 This person has some of the best looking diaper AI art I've seen by far, wonder where they got their training data from since I don't really recognize the artstyle of the diapers.
>>29006 amazing how it can make unique things like diaper bulge onesies so well, yet nearly every hand is fucked up
>>29006 The omupan-style diapers definitely make me suspect there's some Yah-Yah-Doh in the mix (who I know has content on DLsite that has still never been circulated, VN rips and things), but I can't be sure about the tape-style diapers. >>29075 Hands are hard, as any artist will tell you.
(75.33 KB 512x768 1.jpg)

(76.88 KB 512x768 2.jpg)

(77.99 KB 512x768 3.jpg)

(98.12 KB 512x768 4.jpg)

(84.83 KB 512x768 5.jpg)

hmph, I've got a really fun pullups LoRA
(73.00 KB 512x768 7.jpg)

(68.20 KB 512x768 8.jpg)

(78.15 KB 512x768 6.jpg)

>>29102 >>29103 Gag, can we make a separate thread for furries so I don't have to see these dog-faced (or other animals) bitches.
(44.42 KB 576x512 6.jpg)

(44.58 KB 576x512 1.jpg)

(43.27 KB 576x512 2.jpg)

(52.13 KB 576x512 3.jpg)

(75.13 KB 576x512 5.jpg)

>>29105 good good let the furry flow through you
(767.45 KB 1229x1690 F4ebhyMW0AAlG-7.jpg)

>>29105 We did. Problem is that >>29102 >>29103 >>29107 is too lazy to use the power of scrolling down.
With the new SDXL model gaining popularity, is there anyone already trying to train a diaper model based on it? The base model seems to be pretty good at understanding prompts.
(4.87 MB 2048x2048 JuggernuatSDXL3.png)

>>29426 I might at some point; mostly I've just been playing around too much with my custom LoRAs.
Was AiFogs pixiv deleted?
>>29463 I got suspended; waiting to hear back from support.
>>29488 Are you going to be reinstated, or are there other places to find your stuff? There were a lot of great things in your works :(
>>29855 No clue, still radio silence from them. I messed up and posted gore/violence and didn't realize I had marked it as R-18, not R-18G. Here's everything:
https://mega.nz/folder/hONWDSTJ#vkEk_EKVegY5otycr0KKuw
Uploaded by oldjack (have to register):
https://www.shireyishunjian.com/main/forum.php?mod=viewthread&tid=249167
https://pixeldrain.com/u/tsCG6Bpc (password is oldjack)
(446.25 KB 676x676 OIG.png)

(384.82 KB 676x676 OIG.png)

(444.96 KB 676x676 OIG.png)

(531.40 KB 676x676 SPOILER_OIG.png)

The new DALL-E 3 is available on the Bing Image Creator. Pretty powerful, but it takes a bit of workaround: it doesn't do explicit stuff, and I couldn't get it to do any in diapers, but A+ for doing more AB stuff. Caption artists are gonna have a field day with this, I think.
>>29979 Thanks! Still hope you get your account back :)
(84.62 KB 1024x1024 OIG.jpg)

(136.75 KB 1024x1024 OIG.nnfF.jpg)

(118.00 KB 1024x1024 OIG.jpg)

(118.97 KB 1024x1024 IMG-20231003-WA0009.jpg)

>>30075 Managed to get some good results after a bit of working around Bing's filters.
>>27555 WOW these are amazing!!!
(139.73 KB 1024x1024 OIG (6).jpg)

(174.51 KB 1024x1024 OIG (7).jpg)

(164.18 KB 1024x1024 OIG.LpCeZ7rsmDuyP.jpg)

These were made with DALL-E 3
(130.73 KB 1024x1024 OIG (8).jpg)

(168.10 KB 1024x1024 OIG (9).jpg)

(134.96 KB 1024x1024 OIG (10).jpg)

(144.71 KB 1024x1024 OIG (11).jpg)

(160.89 KB 1024x1024 OIG (1).jpg)

(175.51 KB 1024x1024 OIG (2).jpg)

(145.01 KB 1024x1024 OIG (4).jpg)

(170.11 KB 1024x1024 OIG (3).jpg)

(171.29 KB 1024x1024 OIG (5).jpg)

Can we get some more of these? AIFog, are you still around? I miss your pixiv!
+18,Diaper,Mess,Girl,Bondage
diaper girl wetting blushing
(1.36 MB 2125x4942 4.jpg)

For the input images when training a LoRA, should it just be the diaper itself with everything else removed? Should it be a full body every time, face censored, background removed? There are so many conflicting guides, and I have yet to get anything that I'd be willing to publish.
hi can you be my mommy
Hey there, I'm looking for some generated pics of a couple of anime characters in diapers. Could someone try their hand at making Raphtalia from The Rising of the Shield Hero? Also Kanna from Miss Kobayashi's Dragon Maid?
a big pacifier
a 20-year-old girl with a stinky diaper
diaper, small tits, brown hair, no shirt, pee
a woman wearing a diaper

