>>1906
I'm not an expert by any means. The main appeal for me is making cool or silly images of characters I like and generally just prompting whatever comes to mind.
I only recently started messing with img2img and I barely use it, so I can't really help you there. The same goes for inpainting, since I rarely use it because
I'm lazy. The few times I did use it, I ran out of VRAM when fixing larger images, and I don't see the point in fixing smaller ones, so I'll just prompt another image or something.
I'm going to assume that you have SD set up and ready to go.
First, look around civitai for a cool model that has a style you like, either anime or realistic (or both depending on model). You don't need a ton of models, but new ones are always showing up so you should test some new stuff every now and then if you like.
The model used to generate the previous 5 images and the one I've been using the most is "DarkSushi" (there are several, but I think the one I'm using is "DarkSushiMixMixColorful", which should be this one if I'm not mistaken: civitai.com/models/24779?modelVersionId=56071)
I also use 7th anime v3 (again, there are several and in this case I use the C variant) for more colorful images. You can find them here: huggingface.co/syaimu/7th_Layer
There is also revAnimated which I didn't use much but produces some cool results and leans towards realism. You can find it here: civitai.com/models/7371/rev-animated
I also have the AbyssOrangeMix ones which I use occasionally and are a mix between anime and realism depending on the variant. You can find them here: huggingface.co/WarriorMama777/OrangeMixs
There is also novelAI, which was all the rage back then but I don't use it anymore.
You can also mix models in the "Checkpoint Merger" tab, but I barely used it.
Second, look around civitai for a LORA of that character you like, a certain art style, specific poses, detail enhancers and other stuff. LORAs really help since generating some obscure characters or specific poses without them would probably be impossible or simply rage inducing. Depending on the model you use you might not even need a LORA for certain characters, such as, but not limited to, Miku, Fate girls, 2B, Dark Magician Girl, Tifa, etc.
It's also fun to see a certain character drawn in the style of an artist that would likely never ever draw that character, such as the Bloodborne hunter by Yoshitaka Amano.
Whether a LORA combines well with the model you are using and generates something good really depends on the model and LORA, so I can't help you there. For example, the models I listed previously (save for revAnimated) generate a very messy Raziel with the Raziel LORA.
You can also use multiple LORAs in the same prompt, but the results vary as well.
Third, get a VAE. I can't really help you here since I'm using the VAE that came with novelAI, and at the time I just followed a tutorial to get everything running. I'm still using it and stuff just werks™. I think people use this one, but don't quote me on this: huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors
With your model of choice in place and the LORAs ready, it's time to prompt. Here are some basic template prompts to get you started and keep in mind that different models will react differently to the prompt and give you different results. You should also make adjustments depending on what you are trying to prompt (trying to get a lovecraftian monster with "horror" in the negative prompt doesn't make much sense). Green is prompt, pink is negative prompt (just pick one of each and test for yourself).
>masterpiece, best quality, highest quality, cinematic lighting, highly detailed, sharp focus, digital painting, extremely detailed
>masterpiece, best quality, (colorful), (finely detailed beautiful eyes and detailed face), cinematic lighting, bust shot, extremely detailed, intricate
<lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name
<(worst quality:1.4), (low quality:1.4), (monochrome:1.1)
<sketch, duplicate, ugly, huge eyes, text, logo, monochrome, worst face, (bad and mutated hands:1.3), (worst quality:2.0), (low quality:2.0), (blurry:2.0), geometry, bad_prompt, (bad hands), (missing fingers), multiple limbs, bad anatomy, (interlocked fingers:1.2), Ugly Fingers, (extra digit and hands and fingers and legs and arms:1.4), (deformed fingers:1.2), (long fingers:1.2), (bad-artist-anime), bad-artist, bad hand, extra legs
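If you're wondering what those (word:1.3) weights actually do: the UI parses them into per-token emphasis multipliers before the prompt hits the model. Here's a rough Python sketch of the idea. It's heavily simplified (the real A1111 parser also handles nesting like ((word)) and [de-emphasis]), so treat it as an illustration, not the actual implementation:

```python
import re

def parse_emphasis(prompt):
    """Split a comma-separated prompt into (token, weight) pairs.

    Simplified model of the emphasis syntax:
      word        -> weight 1.0
      (word)      -> weight 1.1
      (word:1.3)  -> weight 1.3
    Nesting and [de-emphasis] are deliberately not handled here.
    """
    pairs = []
    for part in re.split(r",\s*", prompt.strip()):
        m = re.fullmatch(r"\((.+):([\d.]+)\)", part)
        if m:
            pairs.append((m.group(1), float(m.group(2))))
        elif part.startswith("(") and part.endswith(")"):
            pairs.append((part[1:-1], 1.1))
        else:
            pairs.append((part, 1.0))
    return pairs

print(parse_emphasis("(worst quality:1.4), (low quality:1.4), (monochrome:1.1)"))
# -> [('worst quality', 1.4), ('low quality', 1.4), ('monochrome', 1.1)]
```

The takeaway is just that (worst quality:2.0) isn't magic words, it's a numeric knob on how hard the model avoids (or chases) that token.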
Some words to test for yourself:
pixie cut, off shoulder, comic/manga/magazine cover, panels, speech bubbles, question mark, hearts, navel, midriff, watercolor, skyline, reflection, shadow, profile, concept, blueprint and try "teeth" in negative prompt (so there is a better chance of the character keeping its mouth shut).
General tips:
>The labels have tooltips, so read them (Sampling steps, CFG, Sampling method, etc..)
>Different resolution == different resulting image. Don't think you will get a nice 1920x1080 wallpaper out of that 512x512 image simply because you changed resolution.
>Different sampling method also changes how the image looks but results vary (I tend to use DPM++ 2M Karras, Euler a and DDIM, but you should try them all to see which ones you like).
>If you are testing something it might be a good idea to prompt at a lower resolution first just to see if you get a decent result before you prompt at 4K UHD and melt your GPU.
>A very high CFG value will "cook" your image and make it look terrible (I usually keep it at 7 and go up to 12 at times, but it depends on the model, so test it).
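The reason high CFG "cooks" images: classifier-free guidance extrapolates the prompt-conditioned prediction away from the unconditioned one, and a big scale pushes values way past anything the model would naturally produce. A toy sketch with made-up numbers (real models do this per-pixel on noise predictions, but the arithmetic is the same):

```python
def cfg_mix(uncond, cond, scale):
    """Classifier-free guidance: extrapolate the conditioned prediction
    away from the unconditioned one. scale=1 is 'just the prompt';
    larger scales exaggerate the prompt's influence."""
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]

uncond = [1.0, -2.0]  # toy numbers, not real model outputs
cond   = [3.0,  1.0]

print(cfg_mix(uncond, cond, 1))  # -> [3.0, 1.0]   (pure conditioned prediction)
print(cfg_mix(uncond, cond, 7))  # -> [15.0, 19.0] (way past either prediction)
```

At scale 7 the result is already far outside the range of either input, which is roughly why very high CFG gives you fried colors and crunchy artifacts.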
>The more sampling steps the longer it takes to generate the image (I usually keep it at 25, but if you get an image you like, test higher values all the way to 150 just to see what changes)
>If you got a cool image, upscale it: check the "Hires. fix" box -> pick the upscaler -> select the denoise value (the higher it is, the more the upscaled image will differ from the original) -> select the upscale factor (don't go too crazy on this unless you have a good GPU; 1.5x or 2x max should be enough to get you something nice) -> don't forget to lock the current seed in place, or you will be generating a new image.
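A quick sanity check on those upscale factors: pixel count (and roughly VRAM use) grows with the square of the factor, which is why 2x already quadruples the work:

```python
def hires_target(width, height, factor):
    """Target resolution for a hires-fix pass. Pixel count scales with
    the *square* of the factor, so 1.5x = 2.25x the pixels, 2x = 4x."""
    return int(width * factor), int(height * factor)

print(hires_target(512, 512, 1.5))  # -> (768, 768), 2.25x the pixels
print(hires_target(512, 768, 2))    # -> (1024, 1536), 4x the pixels
```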
>Read the instructions of the model you are using, since some models need more positive and negative prompts than others.
>Read the instructions on the LORA you are using, since some are as simple as "dante" while others may require "dante, devil may cry, red jacket, white hair, short hair, stylish, etc...."
>When starting SD, you can recover your last prompt data by clicking the little blue icon with a white arrow below the "generate" button instead of manually typing and changing everything every time.
>You should be able to use booru tags in most models
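If your tags come straight from a booru, the usual convention (at least in A1111-style UIs, double-check for your model) is that underscores become spaces, and literal parentheses in a tag get escaped so they aren't read as emphasis syntax:

```python
def booru_tag_to_prompt(tag):
    """Turn a booru-style tag into prompt text: underscores -> spaces,
    and literal parentheses escaped with backslashes so the UI doesn't
    treat them as emphasis. (A common A1111 convention; some models may
    prefer the raw underscore form, so check the model's docs.)"""
    return tag.replace("_", " ").replace("(", r"\(").replace(")", r"\)")

print(booru_tag_to_prompt("long_hair"))               # -> long hair
print(booru_tag_to_prompt("hatsune_miku_(cosplay)"))  # -> hatsune miku \(cosplay\)
```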
>When trying a model or LORA, check out the gallery and copy some prompts to both test the model/LORA and to use as a base for your stuff.
>You should know by now that some human beings can have more than 5 fingers in each hand and some even have extra limbs (inpaint helps deal with such mutations).
>There is an option to save your prompt to a text file, but you can also just drag and drop a prompted image into the "PNG Info" tab to see its info (assuming the metadata wasn't cleared).
I think this just about covers it. There's a lot more to this, obviously, and those template prompts are nothing compared to what some people write, but just go ahead and give it a try; this should be more than enough for now.