>>5338
Thanks anon.
>In the second pic, is that meant to be worn that way, or is it a nip slip?
I imagine it's either a slutty tube-like leotard or it's some kind of curtain that she's pulling down with her legs to reveal her nipples. Either way, that expression should tell you she's doing it on purpose.
> In the third pic, the details on the hand might be too realistic. There's a reason even the more highly detailed anime style artists practically never draw detail on the knees, elbows, and knuckles of girls. I can say the same about the wrist of the last girl visible in the bottom left corner. Realistic wrinkling in these areas just isn't very attractive to most people who like the smooth clear skin of cartoon women.
I can sort of see it with the wrist. After spending so much time fixing hands, it goes against my instinct to remove detail from one, but I'll keep that in mind.
>>5385
I'm happy to see more people getting into it; those are some really nice pics.
>My favorite images from this thread is the second image from >>4335 and the second image from >>4833 Wow, goddamn gorgeous. What were the prompts, setups, and processes for those? o=
Thanks anon.
The first pic is pretty old, I generated that one with NovelAI and only inpainted it with my local SD. I don't have the prompt anymore but I'm pretty sure it had heavy emphasis on Boris Vallejo and Julie Bell.
The second pic is newer but I also didn't save the prompt. In that case I'm pretty sure I used the cardos animated model and the "painterly" style preset (posted below).
> Any critiques? So far, I don't understand inpainting; it seems whatever the software newly generates never matches the underlying image.
The obvious one is that the pics aren't inpainted, so they tend to lack detail in the eyes.
The basic thing with inpainting is to check the "only masked" option, use the same prompt as the base image (you can alter the prompt slightly to get specific things in the area you're inpainting), and generally use low denoising strength in the 0.25-0.45 range.
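By the way, if you launch the webui with --api you can script those same inpainting settings from Python. Rough sketch below; the endpoint and field names are from memory, so double-check them against your local /docs page, and the file paths are just placeholders:
import base64, requests

def b64(path):
    # read an image file and base64-encode it the way the API expects
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("base.png")],   # the pic you're fixing (placeholder path)
    "mask": b64("mask.png"),            # white = area to inpaint (placeholder path)
    "prompt": "same prompt as the base image",
    "negative_prompt": "same negatives as the base image",
    "denoising_strength": 0.3,          # keep it in the 0.25-0.45 range
    "inpaint_full_res": True,           # the "only masked" option
    "inpaint_full_res_padding": 32,     # extra pixels of context around the mask
    "sampler_name": "DPM++ 2M Karras",
    "steps": 28,
    "cfg_scale": 6.0,
}

# POST to a local webui started with --api, then save the first result
r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
with open("inpainted.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))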
Another important trick I've learned with inpainting is something I like to call "context points", which I try to explain in the last pic here. Basically, when inpainting only the masked area, the AI grabs a piece of your pic, upscales it, generates it again and then downscales it back into the pic. The problem is that the AI only "sees" the little square it's inpainting, so it may lack the context it needs to understand what the pic is supposed to be. What I like to do is add small dots away from the main masked area to increase the size of the area the AI can "see" during inpainting. A typical example is inpainting a hand in a weird position: by just masking the hand itself, the AI won't be able to tell that it's supposed to be a hand, so you add a small dot on the elbow; now the inpainting canvas expands to cover the whole forearm and the AI is much more likely to understand that it's supposed to paint a hand.
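If it helps to see it spelled out, here's a quick PIL sketch of building that kind of mask by hand (the coordinates are made up; normally you'd just dab the dot in the UI's mask editor):
from PIL import Image, ImageDraw

# Build a mask the same size as the base pic: white = inpaint, black = keep.
base = Image.open("base.png")            # placeholder path
mask = Image.new("L", base.size, 0)
draw = ImageDraw.Draw(mask)

# Main masked area: the hand you actually want regenerated.
draw.ellipse((300, 420, 380, 500), fill=255)

# "Context point": a tiny dot on the elbow, far from the hand.
# With "only masked" inpainting the crop has to contain every masked pixel,
# so this dot stretches the canvas to cover the whole forearm and the AI
# gets enough context to tell that the thing at the end is supposed to be a hand.
draw.ellipse((180, 300, 186, 306), fill=255)

mask.save("mask.png")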
Anyway, since you're probably using my old styles.csv that I posted a while ago, here's my updated file that I've been using lately with more tweaks and experiments.
name,prompt,negative_prompt
Pulp,"(Boris Vallejo, Bob Byerley, Yusuke Murata, Frank Frazetta:1.3)",
Pulp Variant,"(Boris Vallejo, Julie Bell, Robert Mcginnis, Peter Andrew Jones:1.3)","(depth_of_field, blurry, bokeh:1.2)"
Space Pulp,"(Terese Nielsen, Ken Kelly, Jim Warren, Mark Brooks:1.3)",
Sci-Fi Landscape,"(Bob Eggleton,Paul Lehr, Barclay Shaw, Peter Elson:1.4)",
Inky,"(Philippe Caza, Adam Hughes, Alice Pasquini, Greg Staples:1.3)",
Cute Grill,"(Bob Byerley:1.4), (Albert Lynch:1.4), (Sophie Gengembre Anderson:1.3), (Ralph Horsley:1.3)",
Painterly,"(Alexandre Cabanel, Jean-Baptiste Monge, Albert Lynch:1.3)",
Faun,"(Adonna Khare, Ed Binkley, Jean Veber, Kentaro Miura:1.4), (Junji Ito:1.2), (black_and_white, greyscale:1.2)",
Fairy,"(Aaron Jasinski, Cyril Rolando, Agostino Arrivabene:1.3), (colorful:1.4)",
Ornate,"(Bob Byerley, Sophie Gengembre Anderson, Aaron Horkey, Luis Royo:1.4)",
Wispy,"(Tomasz Alen Kopera, Agostino Arrivabene, Neil Blevins:1.3), (Luis Royo:1.4)",
Fairy Mix,"(Cyril Rolando, Agostino Arrivabene:1.3), (Luis Royo, Bob Byerley:1.4)",
Anime Fuzzy,"(William Holman Hunt, Katsuhiro Otomo, John William Waterhouse:1.2),(Rumiko Takahashi:1.3), (big_hair)",
Sci-Fi,"(Wadim Kashin, Stephan Martiniere:1.4),(intricate:1.2),(Luis Royo, Emilia Wilk:1.2)",
MAF,"(Greg Staples, Luis Royo, Stephen Hickman, Terese Nielsen, Kentaro Miura:1.3)",
Gachashit,"(Peter Mohrbacher, Raymond Swanland, Tommy Arnold, Aleksi Briclot:1.4)",
High Detail,<lora:add_detail:1>,
Hyper Detail,<lora:add_detail:2>,
Low Detail,<lora:add_detail:-1>,
Furry Begone,,"(animal_ears, pointy_ears, horns, tail:1.4)"
On that note, one more tip for the A1111 UI: you can edit your ui-config.json file to set things like your preferred default pic size, base prompts, and all sorts of other options that get preloaded when you start the UI. I've found that this saves a lot of time. For example, I have my usual base prompt and negative prompt filled in and the inpainting options checked by default:
"txt2img/Prompt/value": "(little girl|loli:1.4), (flat_chest:1.3), nsfw",
"txt2img/Negative prompt/value": "bad_prompt_version2, (worst quality, low quality, pubic hair:1.4), <lora:easynegative:1>",
"txt2img/Sampling method/value": "DPM++ 2M Karras",
"txt2img/Sampling Steps/value": 28,
"txt2img/CFG Scale/value": 11.0,
"txt2img/Width/value": 512,
"txt2img/Height/value": 768,
"img2img/Masking mode/value": "Inpaint masked",
"img2img/Sampling Steps/value": 28,
"img2img/CFG Scale/value": 6.0,
"img2img/Denoising strength/value": 0.25,
"customscript/sd_upscale.py/img2img/Tile overlap/value": 128,
"customscript/sd_upscale.py/img2img/Scale Factor/value": 2.0,