Instead of writing all my tips into my Notion notebook, I thought I’d create a living document that’s public.
Hope you guys find these tips useful.
- SDXL native sizes
- Negative Prompt & Avoiding Wall of Negs
- An optimized workflow for hands
- Emphasizing and De-emphasizing Prompts
- Looking for a Painting “In the Style Of”?
- Generate Forever in Automatic1111
- Play a sound when generation is done
- The X/Y/Z plot inputs can take ranges
- My Sustainable And Repeatable Workflow for High-Res Images
- DPM++ 2M Karras optimal settings
- Outbound resources
- More AI tutorials
SDXL native sizes
When using SDXL or XL models, use these dimensions for the best results.
- 640 x 1536
- 768 x 1344
- 832 x 1216
- 896 x 1152
- 1024 x 1024
- 1152 x 896
- 1216 x 832
- 1344 x 768
- 1536 x 640
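These resolutions share a pattern worth knowing: every side is a multiple of 64, and the total pixel count stays close to the 1024 x 1024 area SDXL was trained on, which is why they behave so well. A quick Python sketch (the `SDXL_SIZES` list name is my own) checks both properties:

```python
# SDXL's recommended resolutions all share two properties:
# every side is divisible by 64, and the total pixel count
# stays within a few percent of the 1024x1024 training area.
SDXL_SIZES = [
    (640, 1536), (768, 1344), (832, 1216), (896, 1152),
    (1024, 1024), (1152, 896), (1216, 832), (1344, 768), (1536, 640),
]

BASE_AREA = 1024 * 1024

for w, h in SDXL_SIZES:
    assert w % 64 == 0 and h % 64 == 0               # sides divisible by 64
    assert abs(w * h - BASE_AREA) / BASE_AREA < 0.07  # roughly 1 megapixel
    print(f"{w} x {h}  aspect {w / h:.2f}  pixels {w * h:,}")
```

If you want a non-listed aspect ratio, picking the nearest width/height pair that keeps these two properties is a reasonable bet.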
Negative Prompt & Avoiding Wall of Negs
According to this thread, you should only neg prompt according to the output you see.
In my opinion, overprompting might seem like a “no pain, no gain” strategy, but a wall of prompt terms makes it confusing when you want to delete something later, and it can confuse SD as well.
As it relates to negs, the author explains that these models don’t understand English or the images they create, but rather use weights to predict the picture in the latent noise.
If the training images aren’t labeled with the phrase used in the prompt, the model won’t understand it.
The author demonstrates this by testing various terms in a positive prompt, such as “hands”, “bad”, “deformed”, and “drawn”. The results show that these terms don’t significantly change the base picture or reflect the English meaning of the words.
The author also tests other negative prompts, such as “bad”, “deformed”, “drawn”, “poorly drawn”, “bad hands”, “deformed hands”, “poorly drawn hands”, “too many hands”, and “more than two hands”. The results show that these prompts often hide the hands in the image, rather than improving the depiction of hands.
Locke_Moghan summarizes:
- The language parser doesn’t understand English. They are just following a Chinese Room model.
- Don’t copy a bunch of negative prompts you grabbed from some random guy on Reddit just because he used it on his masterpiece, Greg Rutkowski, trending on Artstation image. Most of them are completely useless.
- Only use negative prompts when you actively need to fix a problem with an image you’re trying to generate.
- Use positive prompts as adjectives next to the thing you want to fix, rather than untargeted negative prompts that apply to the whole image.
- Test out a negative prompt as a positive one to see if it will express itself. Many won’t, because most of these phrases aren’t going to appear as captions in the training image set.
- Negative prompting won’t fix your aspect ratio problems. There are different solutions for that.
An optimized workflow for hands
At small sizes, such as 512x512px, you won’t get well-formed hands because SD can’t fill in the details.
At those sizes, your goal is just to find a good composition.
Once you find one, use the PNG Info tab to recover the chosen image’s seed, prompt and other generation details.
Now, reuse those settings and turn on hires. fix, so the image is regenerated at a higher resolution where SD has enough pixels to render the hands properly.
Emphasizing and De-emphasizing Prompts
() = more
[] = less
If you aren’t seeing the effect that you want, consider upping the emphasis by 0.1.
Let’s say you want the scene to be dark. Move it 0.1 point above 1.0 (the base value).
(dark:1.1)
Likewise, you can reduce the emphasis with a weight below 1.0:
(dark:0.9)
Square brackets also reduce attention: each [dark] divides the weight by 1.1. Be careful, though: [dark:0.9] is not a weight. In Automatic1111, a colon inside square brackets is prompt-editing syntax, which switches the prompt partway through sampling.
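Under the hood, Automatic1111 turns this punctuation into a per-token attention multiplier. Here’s a minimal sketch of the rule (the `effective_weight` helper is hypothetical, not part of A1111): each `(` multiplies by 1.1, each `[` divides by 1.1, and an explicit `(word:1.5)` sets the factor directly.

```python
def effective_weight(parens=0, brackets=0, explicit=None):
    """Rough model of how A1111 weights a token (not the actual parser):
    each ( multiplies attention by 1.1, each [ divides it by 1.1,
    and an explicit (word:1.5) overrides everything."""
    if explicit is not None:
        return explicit
    return 1.1 ** parens / 1.1 ** brackets

print(effective_weight(parens=1))      # (dark)      -> x1.1
print(effective_weight(parens=2))      # ((dark))    -> x1.21, i.e. 1.1 ** 2
print(effective_weight(brackets=1))    # [dark]      -> x(1 / 1.1), about 0.91
print(effective_weight(explicit=1.1))  # (dark:1.1)  -> x1.1 exactly
```

This is why `(dark:1.1)` and `(dark)` are interchangeable, while stacking parentheses compounds multiplicatively rather than adding 0.1 each time.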
Looking for a Painting “In the Style Of”?
If you are looking to create a painting, visit Urania, which has a sampling of each painter’s style.
You can use their name in your prompt by putting “in the style of [painter]” and that will influence your output.
Generate Forever in Automatic1111
If you right click the “Generate” button, you will see the “Generate forever” button.
I’d use this when my prompt is on point, and it’s just a matter of rolling the dice until I hit the spot.
For example, if the composition of your image is almost perfect but the fingers in your small test batches aren’t good enough, letting it run for a long time will probably yield a good result eventually.
I prefer to put my batch count to 1 when I run this, so the time estimate will be exactly for the one image.
Play a sound when generation is done
In your Automatic1111 folder, place a notification.mp3 file. It will play when your image generation is done. (Tip by joe0185.)
I tested this and it worked. Make sure to Reload UI.
You can get notification sounds here.
The X/Y/Z plot inputs can take ranges
- Rather than 1,2,3,4,5, use 1-5.
- To take steps other than 1, such as 1,3,5,7,9, use 1-9 (+2). This works with fractional steps, too, such as 4-8 (+0.5). Negative steps are also allowed.
- To have it figure out the step size based on the count of values you want, put the count in square brackets. For example, 1-10 [5] produces five values from 1 to 10 inclusive: 1, 3, 5, 7, 10.

(Tip by –recursive.)
Pretty useful if you want to set your defaults in Stable Diffusion Automatic1111.
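To see exactly what these notations expand to, here’s a rough Python reimplementation of the expansion rules (`expand_range` is my own sketch, not A1111’s actual parser; the [N] form appears to space values evenly and truncate to integers, which is how 1-10 [5] yields 1, 3, 5, 7, 10):

```python
def expand_range(start, end, step=None, count=None):
    """Sketch of A1111's X/Y/Z range syntax (not the actual parser):
    "1-5"       -> expand_range(1, 5)
    "1-9 (+2)"  -> expand_range(1, 9, step=2)
    "1-10 [5]"  -> expand_range(1, 10, count=5)
    """
    if count is not None:
        # [N]: N evenly spaced values, truncated to ints for integer axes
        spacing = (end - start) / (count - 1)
        return [int(start + i * spacing) for i in range(count)]
    step = 1 if step is None else step
    values, v = [], start
    # walk from start toward end, inclusive; works for negative steps too
    while (step > 0 and v <= end) or (step < 0 and v >= end):
        values.append(v)
        v += step
    return values

print(expand_range(1, 5))              # "1-5"       -> [1, 2, 3, 4, 5]
print(expand_range(1, 9, step=2))      # "1-9 (+2)"  -> [1, 3, 5, 7, 9]
print(expand_range(1, 10, count=5))    # "1-10 [5]"  -> [1, 3, 5, 7, 10]
```

Note how the [5] form is not the same as a fixed step: the gaps are 2.25 apart before truncation, which is why the last gap (7 to 10) looks wider than the others.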
My Sustainable And Repeatable Workflow for High-Res Images
Here are the steps that repeatedly produce high-quality, print-ready results.
- Get a sample pose for ControlNet
- Prompt and pray
- Run CFG scale plot
- Inpaint erroneous areas
- Upscale
- Re-generate face
DPM++ 2M Karras optimal settings
According to this thread…
The original poster, roshlimon, mentions that different samplers require different levels of high-resolution fix denoising strength for optimal results.
They provide an example with Euler/ancestral and Karras samplers, stating that the optimal denoising strength for Euler/ancestral is around 0.4, while Karras requires 0.6-0.7. They also note that these values can change depending on the upscaler used.
ThrustyMcStab shares their experience with DPM++ 2M Karras, stating that it gives excellent results at a denoising strength of 0.45.
They also mention that increasing the denoising strength to 0.55 and above starts to degrade the image quality.
Outbound resources
My best learning resources are:
Reddit r/stablediffusion
Every day, there’ll be new methods and tutorials, along with a lot of graphics and examples. You can learn a lot just by browsing this subreddit daily.
SD Compendium
A website with links based on category.
For example, if you want to find some links related to LoRa, upscaling or something else, then this is the first place to visit.
Stable Diffusion Art
A much more composed and collected resource that has many tutorials for Stable Diffusion.
I recommend checking it periodically because the tutorials are top notch.
Upscalers database
Upscalers have their own niches. There are general-purpose upscalers as well as specialized ones for foliage, sharpness, manga, pixel art, etc.
Here’s a database where you can see a whole list of them with descriptions.
Have a project in mind?
Websites. Graphics. SEO-oriented content.
I can get your next project off the ground.
See how I have helped my clients.