>>449
>>451
>CUDA trouble.
I don't use Windows, so the depth of my advice is limited.
Your first step is to troubleshoot your CUDA installation. Find some NVIDIA or third-party tool that tests whether CUDA works, and whether it's using your GPU.
If CUDA isn't working in that test you'll need to fix your CUDA installation, and possibly upgrade or reinstall your NVIDIA drivers.
If CUDA is working, then something in PyTorch doesn't like your NVIDIA install. Go look at the PyTorch website for troubleshooting documentation and advice.
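If you want a quick sanity check from the PyTorch side, run something like this in the same Python environment the webui uses (standard PyTorch calls, nothing webui-specific):

import torch
print(torch.__version__)                  # a CPU-only build will say "+cpu" here
print(torch.cuda.is_available())          # False means PyTorch can't use CUDA at all
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # should print your GPU's name

If is_available() comes back False, PyTorch was either installed as the CPU-only build or it can't talk to your driver, which narrows down where to look.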
>Everything looks bad and takes a billion years to render.
If it's any consolation, it would have looked bad even with the GPU working. CPU and GPU should give very similar results for the same inputs. Making it look good depends on using a good prompt and settings.
>>452
>Thank you so much, anon. Obviously I made it as shitty as possible just as an example, I'd obviously go ahead and curate it better, but I'm satisfied with your answer.
Glad it was of use.
>Speaking of, if /v/ decides it doesn't want to keep these threads up anymore, I made >>>/ais/ just in case
I don't see these going away any time soon. Regardless, you'll want some rules and content if you don't want it to just look "parked".
>>453
>Any more helpful advice to gtx1060 users like me?
Have patience and learn to live with the limits of your VRAM? To be honest, the only real advice I can give for a 1060 is to read up on the trade-offs between --lowvram and --medvram. Any other advice I can give would be generic to any setup.
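For example, assuming you're running the AUTOMATIC1111 webui, those flags go in the launcher script's COMMANDLINE_ARGS, e.g. in webui-user.bat on Windows:

set COMMANDLINE_ARGS=--medvram

(or --lowvram if --medvram still runs out of memory; the same variable exists in webui-user.sh on Linux).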
Here's a half-rambling collection of tips from my experience.
<Prompt crafting
Odds are you'll spend a lot of time iterating on a prompt to get it "just right".
- Choose a model suitable for what you're doing. Pay attention to the "language" the model uses. For example: Stable Diffusion uses natural language; Waifu Diffusion and NovelAI use danbooru tags; and Yiffy uses e621 tags. Of course, everything derived from Stable Diffusion inherits its natural language too.
- Set your batch size as high as your VRAM allows (the limit will change with your image size).
- Set the number of batches per run as high as your patience allows.
- Use a constant seed. Any number can be chosen, but you want to keep it constant. When you're trying to craft the perfect prompt you want to be able to judge whether your latest change has had an effect. If the seed is randomized each run you'll be left wondering whether improvements or failures are due to prompt changes or the luck of the dice roll.
- Use brackets "()" to emphasize things that are important, or add them if it's not giving them enough attention. You can nest brackets for extra emphasis, or use the ":" number notation. "(some phrase)" = "(some phrase:1.1)", "((some phrase))" = "(some phrase:1.21)" and "(((some phrase)))" = "(some phrase:1.331)". (See the short calculation after this list for where those numbers come from.)
- I generally use a limit of "(some phrase:1.3)" for emphasis. Going much over this results in over-emphasis, with the phrase getting injected everywhere it can vaguely fit. However, if you go to 1.4 and over but don't see over-emphasis, one of two things is happening: the phrase has little or no effect, or other things in your prompt are counteracting it.
- Try to keep each adjective immediately before its subject. This can sometimes reduce how much the adjective bleeds over into other things in the picture, but it's not a guarantee.
- Try synonyms and related words if a word doesn't appear to be doing anything, or isn't doing enough.
- I typically start a prompt with a short phrase or sentence describing what I want in the picture and then add comma-separated phrases and words to fill in details.
- To put something in the background, write "with X in the background", e.g. "with a busy market street at night in the background".
- Remember to use the negative prompt to get rid of or reduce unwanted details. Ladies with their legs wide open in the wholesome picture you're trying to make? Use "spread legs" in the negative, or maybe "cameltoe".
- Simple smiles are too much? Try using adjectives on your smiles. Or be specific about expression or mood.
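As promised, the bracket weights above aren't magic numbers: each extra pair of "()" multiplies the attention weight by 1.1, so nesting compounds. A throwaway check:

# each nesting level multiplies the weight by 1.1 (A1111-style bracket emphasis)
for n in range(1, 4):
    print("(" * n + "some phrase" + ")" * n, "->", round(1.1 ** n, 3))
# prints weights 1.1, 1.21, 1.331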
<inpainting
- Use "inpaint at full resolution". The will take your masked area, plus some padding, and enlarge it to your full image size. It will then inpaint that. The output is then scaled back down and pasted into the original image. This can work well for repainting fiddly details like hands. Doesn't work well if the outer bounds of your mask encompass most of the image. Sometimes zooming in too much stops it from working well.
- If using an image editor, try to make things look less wrong before doing a roll with inpaint.
- Start with the same prompt used to create the image, but don't be afraid to change it if necessary.
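For the curious, "inpaint at full resolution" is roughly the following geometry. This is a minimal sketch with Pillow, not the webui's actual code, and inpaint_model() is a hypothetical stand-in for whatever does the real inpainting:

from PIL import Image

def inpaint_at_full_resolution(image, mask, padding=32, work_size=(512, 512)):
    # crop to the masked region plus some padding
    left, top, right, bottom = mask.getbbox()
    box = (max(left - padding, 0), max(top - padding, 0),
           min(right + padding, image.width), min(bottom + padding, image.height))
    crop = image.crop(box)
    # enlarge the crop so the model spends all of its pixels on the fiddly detail
    upscaled = crop.resize(work_size, Image.LANCZOS)
    repainted = inpaint_model(upscaled, mask.crop(box).resize(work_size))  # hypothetical
    # scale the result back down and paste it over the original region
    patch = repainted.resize(crop.size, Image.LANCZOS)
    image.paste(patch, box[:2])
    return image

The reason it falls apart when the mask covers most of the image is obvious from the sketch: there's nothing left to enlarge.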
<misc
- Use the X/Y plot script to experiment with the effects of changing parameters. It can help you get a feel for what the knobs do, and it puts the results in a nice chart.
- Forgot what prompt you used for a past image? By default this information is saved in the PNG's metadata. Use the PNG Info tab to look at the prompt and settings, or see the snippet after this list. (Sadly 8chan's software strips this information from PNGs.)
- Look at other people's prompts to see how they achieved what they did.
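If you'd rather not open the webui just to read that metadata, Pillow can pull it out too (assuming the file hasn't been stripped; the settings live in a PNG text chunk named "parameters"):

from PIL import Image

img = Image.open("some_image.png")
# A1111 stores the prompt and settings in a text chunk called "parameters"
print(img.info.get("parameters", "no generation info found"))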
Settings for the image batch (using a Berry Mixer model):
a catgirl wearing power armor sitting in a helicopter cockpit, with a cityscape at night in the background, red hair, flirty smile
Negative prompt: lowres, bad anatomy, bad hands, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, monochrome
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 5, Size: 512x768, Model hash: 579c005f, Batch size: 8, Batch pos: 0