
Stable Diffusion Poster Creation with Photoshop: Tested Workflow

I believe that Stable Diffusion will allow you to get better images for your marketing posters. Here’s what we will achieve:

  • Generate people for poster
  • Upscale AI-generated poster image
  • Ensure sharpness at A4/Letter size


two girls generated by stable diffusion ai giving thumbs up

Why Stable Diffusion? For one, you can get people in a specific pose.

Having two people in your poster giving the thumbs up, smiling widely and looking enthusiastic really gives a positive boost to the poster’s eye-catching prowess.

With the image above, I created the poster below.

stable diffusion poster

You might notice the people in the original image and the poster’s image are a bit different. There’s always a bit of clean-up done in Photoshop.

Let’s walk through the steps of how I did the poster.

Important: know your dimensions

You should know the dimensions you need as early as possible, because they affect how you approach upscaling later on.

Let’s try for an A4/Letter poster. If you need an A3- or Tabloid-sized poster, scale the numbers up accordingly.

At 300 DPI, we’ll need 2480 × 3508 px for an A4 poster.

My poster will be about 50% image, so we’ll need an AI-generated image that is about 2400px wide and 1800px tall.
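If you want to double-check the numbers for other paper sizes, here’s a quick sketch in plain Python that converts millimetres to pixels at a given DPI:

```python
# Convert a paper size in millimetres to pixels at a given print resolution.
MM_PER_INCH = 25.4

def mm_to_px(mm: float, dpi: int = 300) -> int:
    return round(mm / MM_PER_INCH * dpi)

# A4 is 210 x 297 mm
width_px = mm_to_px(210)    # 2480
height_px = mm_to_px(297)   # 3508
print(width_px, height_px)
```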

Get a sample pose image for ControlNet

For starters, you will need to get an image with two people posing with their arms in the position that you want.

You really don’t need to have an exact thumbs-up pose since OpenPose won’t capture the thumbs up.

Using OpenPose properly deserves its own step-by-step article. For brevity, I’ll walk through the steps quickly.

With this OpenPose outline…

openpose outline

You can generate these:

You might be wondering: don’t I need an image where the person is actually giving a thumbs up?

If you were using a preprocessor other than OpenPose, then sure.

However, OpenPose filters out all hand gestures, so it doesn’t matter.

You’ll add the thumbs up in by prompting.
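If you want to generate the outline yourself rather than rely on the preprocessor in the ControlNet panel, here’s a minimal sketch using the controlnet_aux package. The filename is a placeholder:

```python
# Sketch: extract an OpenPose skeleton from a reference photo.
# Assumes `pip install controlnet_aux` and a local file pose_reference.jpg (placeholder name).
from PIL import Image
from controlnet_aux import OpenposeDetector

openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
reference = Image.open("pose_reference.jpg")

# By default only the body skeleton is detected -- hands (and therefore the
# thumbs up) don't make it into the outline, which is why the gesture has to
# come from the prompt instead.
pose_outline = openpose(reference)
pose_outline.save("openpose_outline.png")
```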

Prompt and Pray

It’s time to prompt and pray. Here’s the prompt I used:

Prompt

Photorealistic image of two people on a white background, smiling, teeth, thumbs up, detailed skin, skin pores, wrinkles

Negative prompt

B&W, naked

Other settings

  • Steps: 20
  • Sampler: DPM++ 2M Karras
  • CFG scale: 5.0
  • Seed: 2587407152
  • Size: 768×768
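If you prefer scripting to clicking, the same settings can be sent through Automatic1111’s API. This is a rough sketch that assumes the web UI was started with the --api flag and is running locally; it leaves out the ControlNet unit, whose API fields vary between extension versions.

```python
# Sketch: reproduce the txt2img settings above via the Automatic1111 API.
# Assumes the web UI was launched with --api and is reachable at 127.0.0.1:7860.
import base64
import requests

payload = {
    "prompt": ("Photorealistic image of two people on a white background, "
               "smiling, teeth, thumbs up, detailed skin, skin pores, wrinkles"),
    "negative_prompt": "B&W, naked",
    "steps": 20,
    # Sampler name as shown in the UI; newer versions split Karras into a separate scheduler field.
    "sampler_name": "DPM++ 2M Karras",
    "cfg_scale": 5.0,
    "seed": 2587407152,
    "width": 768,
    "height": 768,
}

response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
image_b64 = response.json()["images"][0]

with open("thumbs_up.png", "wb") as f:
    f.write(base64.b64decode(image_b64))
```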

Run a CFG plot

If the default settings aren’t giving you what you need, then run a CFG scale plot.

Recently, I have needed to run a CFG plot when using newer models; otherwise, the images end up looking too cartoonish or contrasty. Lower CFG values seem to keep the contrast down.

CFG scale plot in Automatic1111

Go to the bottom where it says “Script”, select the X/Y/Z plot script, and choose “CFG Scale” as the X type.

Type in some values. If you run a plot from 1 to 20, you’ll usually see a major difference. With the output, you can home in on the most appropriate CFG value.
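If you’d rather build the comparison yourself instead of using the built-in script, here’s a minimal sketch that loops over CFG values with a fixed seed, so only the CFG changes between outputs (same local --api assumption as before):

```python
# Sketch: one image per CFG value, with a fixed seed so the comparison is fair.
import base64
import requests

base_payload = {
    "prompt": ("Photorealistic image of two people on a white background, "
               "smiling, teeth, thumbs up, detailed skin, skin pores, wrinkles"),
    "negative_prompt": "B&W, naked",
    "steps": 20,
    "seed": 2587407152,
    "width": 768,
    "height": 768,
}

for cfg in (1, 3, 5, 7, 10, 15, 20):
    payload = dict(base_payload, cfg_scale=cfg)
    r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
    with open(f"cfg_{cfg}.png", "wb") as f:
        f.write(base64.b64decode(r.json()["images"][0]))
```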

Inpaint erroneous areas

Look closely at your image. Are there areas which look unnatural?

The next step is upscaling. Before you upscale, always fix the problematic parts.

stable diffusion blouse buttons problem

The blouse’s buttons are incoherent. Either use inpaint to replace the area:

Or use Content-Aware Fill in Photoshop.

content aware fill stable diffusion

I used Content-Aware Fill because what I wanted was an even surface. I didn’t want buttons.

Upscale your poster image

My graphics card is on the weaker end of the minimum requirements, so I can’t generate a large image from the get-go.

Remember, the image specs we require are 2400px by 1800px.

Powerful GPU? Here’s your upscaling workflow

If you had a stronger GPU, you could probably generate something that is 1200px by 768px, then run a 2x latent high-res pass.

It will get you close to the required dimensions, and it will give you the best result in terms of detail. If that still isn’t good enough, run another 2x pass with 4x-UltraSharp.

Olivio Sarikas has a video outlining this.
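For reference, on the API side that latent pass corresponds roughly to the hires-fix fields of the same txt2img endpoint. The values below are illustrative, not tested settings:

```python
# Sketch: hires-fix fields that roughly correspond to a 2x latent pass.
# These get added to the txt2img payload shown earlier.
hires_fields = {
    "enable_hr": True,           # turn on high-res fix
    "hr_scale": 2,               # 2x upscale of the base resolution
    "hr_upscaler": "Latent",     # latent upscaler, as described above
    "denoising_strength": 0.55,  # illustrative value; tune to taste
}
```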

My workflow with a weaker GPU

I have an Nvidia RTX 2070 with 8GB of VRAM, so high-res fix doesn’t work for me. It generates a ton of artifacts.

In this case, I could only run a 4x-UltraSharp upscale, which I ran at 2x. This produced some artifacts.
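For reference, here’s roughly what that step looks like through the API’s extras endpoint (a sketch; it assumes 4x-UltraSharp is installed and the web UI is running locally with --api, and the filenames are placeholders):

```python
# Sketch: 2x upscale with the 4x-UltraSharp model via the extras endpoint.
import base64
import requests

with open("poster_image.png", "rb") as f:  # placeholder filename
    source_b64 = base64.b64encode(f.read()).decode()

payload = {
    "image": source_b64,
    "upscaling_resize": 2,        # 2x output, even though the model itself is a 4x model
    "upscaler_1": "4x-UltraSharp",
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/extra-single-image", json=payload)
with open("poster_image_2x.png", "wb") as f:
    f.write(base64.b64decode(r.json()["image"]))
```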

In the image below, we can see some artifacts. Teeth have deep black gaps between them.

Eyelashes look like needles.

The eyes look unnaturally dark.

These need to be fixed using inpainting.

Fixing an upscaled face

If you have a powerful GPU, you might be able to run an inpaint of a 2400x1800px image, but I can’t.

Instead, I had to crop the sections I wanted to work on, send them to inpainting, and then merge the output back in Photoshop.

When cropped, these sections were about 700px square. That’ll work with my RTX 2070.

photoshop stable diffusion face inpaint

Crop the face out.

Save it and bring it into inpaint.
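If you’d rather do the crop outside Photoshop, a quick Pillow sketch works too. The filename and coordinates below are placeholders you’d read off your own image:

```python
# Sketch: crop a ~700px square around a face with Pillow instead of Photoshop.
from PIL import Image

full = Image.open("poster_image_2x.png")  # placeholder filename

left, top = 400, 250        # top-left corner of the face region (example values)
size = 700                  # roughly 700px square fits an 8GB card for inpainting
face = full.crop((left, top, left + size, top + size))
face.save("face_crop.png")  # send this file to the inpaint tab

# After inpainting, the result could be pasted back at the same coordinates,
# though the article merges it in Photoshop to control the blend with a mask.
```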

stable diffusion inpaint improving facial features

Only inpaint the areas that you want replaced. In this case, it’s the eyes and the mouth because the teeth look unnatural.

Aligning a face in Photoshop

aligning a face in photoshop after stable diffusion

Bring the new image into Photoshop and then reduce its opacity. I like to try 20%, 50% and 70%.

Make sure you can see the outlines.

Then, align the parts that haven’t been changed. For example, if the ears, nostrils or wrinkles haven’t been inpainted, you can use them as alignment benchmarks.

photoshop merge face align to nose and wrinkles

If the two images produce ghosting, they aren’t aligned. Nudge the layer until everything is sharp.

Finally, mask everything, and then start revealing only the parts that are newly generated.

The logic behind this is that by making fewer changes to the original image, you will get fewer “gotchas”.

For example, if you used the new face wholesale, you could end up with alignment issues you didn’t notice until an outline of an eye mysteriously appears below the original eye.

By only using smaller parts of the replacement face, you can more easily check specific parts of the merged image to ensure everything is in alignment.

masking merged face in photoshop from stable diffusion

Creating the poster

One caveat: since my goal was to promote a language exchange, I realized that the people in my poster weren’t reflective of the crowd.

I did a little more inpainting to change the look of the woman on the right to reflect the participants. The steps are the same — extract face, inpaint, merge.

I had to choose between Adobe InDesign and Photoshop to do this.

InDesign is good because it can tell you your photo’s PPI in the Info window, plus you can set up bleeds and margins.

In the end, I chose Photoshop because it’s easier to do graphical effects like the fading background text.

placing text behind people in photoshop

Stable Diffusion turbocharges poster creation

The rest of the steps to creating a proper poster are simply back-to-basics Adobe skills, so I won’t bore you.

photoshop poster creation with stable diffusion

There are more advanced ways to use Photoshop and Stable Diffusion together, such as creating models holding your product.
