>>16261
Not sure off the top of my head. I tend toward the glossier ones, aZovya being my current favourite, but consistentFactor, nexGenSuperModels, edgeOfRealism and level4 have proven pretty good too, and hardblend occasionally knocks TIs out of the park better than all the others. Best to just nab a slew of them and test them against each other with a few seeds. Pics related are decent TI training settings btw
Make sure all your shots are face-on (or near enough). They don't have to be amazingly clear; as long as you have about 15-20 you should be ok. Even with 14 shoddy-as-hell ones I managed to get a girl near photoreal, so it's very doable. You can include bodyshots but mostly you want headshots. Crop them to equal dimensions so they can be easily up/downscaled to 512x512 in the Extras tab (I use 4x_Ultrasharp as the first upscaler and SwinIR_4x at 0.1 for the second, with GFPGAN set to 0.75).
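If you'd rather batch the crop/downscale step outside the webui, here's a minimal Pillow sketch. The guide does this through the Extras tab instead, so the folder names and the LANCZOS filter here are my own stand-ins, not from the guide:

```python
# Centre-crop each training shot to a square, then resize to 512x512.
# "raw_shots" and "dataset" are hypothetical folder names.
from pathlib import Path
from PIL import Image

def square_512(src: Path, dst: Path) -> None:
    img = Image.open(src)
    side = min(img.size)                        # largest centred square
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img.resize((512, 512), Image.LANCZOS).save(dst)

in_dir, out_dir = Path("raw_shots"), Path("dataset")
out_dir.mkdir(exist_ok=True)
for p in in_dir.glob("*.png"):
    square_512(p, out_dir / p.name)
```

Note this does a dumb centre crop; for off-centre faces you'd still want to crop by hand.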
Pic1:
The number of vectors per token relates to the number of training images you're using; around 20 images justifies 4 or 5, which is usually all you need. Otherwise it's roughly:
10-30 pics: 4-6
40-60 pics: 8-10
60-100 pics: 10-12
Initialization text really isn't needed.
I did a faulty TI with 18 headshot pics using 8 vectors. Way too much 'weight', so while it looked decent it was hard to get past just her face showing up in results; even forcing (full body:1.8) could barely get past her tits. If in doubt I'd err on the side of fewer vectors.
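The rule of thumb above as a quick lookup; the cutoff for the 30-40 pic gap is my own interpolation, not from the guide:

```python
# Map training-image count to a suggested vectors-per-token range,
# per the rough bands quoted above.
def vector_range(n_pics: int) -> range:
    if n_pics <= 30:
        return range(4, 7)     # 10-30 pics -> 4-6 vectors
    if n_pics <= 60:
        return range(8, 11)    # 40-60 pics -> 8-10
    return range(10, 13)       # 60-100 pics -> 10-12

# 18 headshots: 8 vectors proved too heavy, so start at the low end.
print(min(vector_range(18)))   # -> 4
```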
Pic 2:
The BLIP option adds txt files describing the images; that's yer dataset.
Pic 3:
MAKE SURE you use the 1.5 checkpoint to train on. A few times I've accidentally had the wrong one selected and made nightmares in the training snapshots, which can be found in:
>webui/textual_inversion/images
Examples of what it's figured out/mapped will show up in there every 30 steps. The results can be OTT, very cartoony/fever-dream-ish (I'll post some examples in my next post); you just need to parse them for the best-looking ones and try out their embeddings from:
>webui/textual_inversion/embeddings
Drag and drop them into
>webui/embeddings
and refresh in the UI.
You can just set the embedding learning rate to 0.004 and let it ride till it hits something you like, but I use a gradual one that runs through the very early steps quickly then ramps down to something sensible by 500. I've found most of the best TIs hit at around 1500 to 3000 steps, though it can be earlier or 'a bit' later; I've not seen any past 4000 that weren't overtrained.
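For reference, the webui's learning rate box accepts a stepped schedule as comma-separated rate:until_step pairs (the last rate with no step runs for the rest of training). The exact numbers below are only a guess at the kind of ramp described, fast early and sensible by 500, not my actual schedule:

```
0.05:10, 0.02:50, 0.01:200, 0.005:400, 0.004
```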
The number of training shots should be divisible by the batch size; usually 1 or 2 is best (a 3080 can't hack more than 2 at once).
For the Prompt Template don't use a default one:
>custom_subject_filewords.txt
It needs to be created and placed in:
>\webui\textual_inversion_templates
All it needs is:
>photo of a [name], [filewords]
That way it'll only try to make photos rather than renderings or paintings while training.
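If you want to script the template file's creation, a quick sketch; the relative path is a stand-in, point it at your actual webui install:

```python
# Create the custom prompt template file for TI training.
from pathlib import Path

tpl = Path("webui/textual_inversion_templates/custom_subject_filewords.txt")
tpl.parent.mkdir(parents=True, exist_ok=True)
tpl.write_text("photo of a [name], [filewords]\n")
print(tpl.read_text().strip())  # -> photo of a [name], [filewords]
```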