I write SEO content for a number of professional businesses, so I need images that reflect the diversity of their clients.
Because some of those clients are in Asia, I'll often find the perfect stock image, only to realise the person in it isn't representative of my client's clientele.
In the past, I would just skip that image. These days, I can just use Stable Diffusion to do a face swap.
I’ll show you how.
Find the perfect image
Scroll through your stock image libraries and find the perfect image.
I am going to use this image which I found on Envato Elements.
I thought this image would be perfect for my dental client because the woman has a nice smile. However, since the client is based in Singapore, it would be ideal if the person in the image looked Asian.
Crop the face out
I am running an Nvidia RTX 2070, which works OK but forces some compromises, since it's a lower-end card for AI image generation.
I could just shrink the image from 6000px on its longest edge down to 1000px so that Stable Diffusion could inpaint it, but then my final image would be stuck at 1000px forever.
The alternative is to crop the face out, inpaint just the crop, and then merge it back into the original image in Photoshop. This lets me keep the final image as large as possible.
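If you prefer to script the cropping step rather than doing it in Photoshop, a minimal sketch with Pillow (the box coordinates are placeholders for wherever the face sits in your image):

```python
from PIL import Image

def crop_face(image: Image.Image, box: tuple[int, int, int, int]) -> Image.Image:
    """Return just the face region (left, upper, right, lower) of the
    full-resolution image. Note down `box` -- you will need the same
    coordinates to paste the inpainted result back in later."""
    return image.crop(box)
```

Only the crop goes into Stable Diffusion, so the rest of the 6000px image is never downscaled.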
I am using the Automatic1111 GUI with Stable Diffusion, so it's actually very easy to do this step.
Upload the cropped image into the inpaint tab. Then, mask the areas of the face which you’d want to change.
Choosing a model for realistic faces in Stable Diffusion
One of the most important aspects is choosing the right model for your face swap.
For a simple face swap like this, the prompt doesn't need to be elaborate. Describe what you need, but be quite explicit about it, especially when it comes to the race of the person.
Some things that are critical are:
- Age (young, old, middle-aged, "in her 20s/30s", etc.)
- Race (European, Asian, African, Arab, etc. Avoid white/brown/etc. because the AI can interpret it as a colour rather than a race)
- Facial expression
I got these images using the prompt "youthful asian woman smiling".
Steps: 24, Sampler: DPM2 a Karras, CFG scale: 7, Seed: 3139388826, Size: 647×594, Model hash: de2f2560, Denoising strength: 0.35, Mask blur: 4
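Automatic1111 also exposes these same settings over an HTTP API if you launch the webui with the `--api` flag. As a sketch, the GUI run above corresponds roughly to this `/sdapi/v1/img2img` payload (the base64 image strings and the local URL are assumptions about your setup):

```python
import json
import urllib.request

def build_inpaint_payload(init_image_b64: str, mask_b64: str) -> dict:
    """Mirror the GUI settings from the run above as an API payload.
    The mask follows the inpaint convention: white = repaint, black = keep."""
    return {
        "init_images": [init_image_b64],
        "mask": mask_b64,
        "prompt": "youthful asian woman smiling",
        "steps": 24,
        "sampler_name": "DPM2 a Karras",
        "cfg_scale": 7,
        "seed": 3139388826,
        "width": 647,
        "height": 594,
        "denoising_strength": 0.35,
        "mask_blur": 4,
    }

def inpaint(payload: dict, url: str = "http://127.0.0.1:7860") -> dict:
    """POST the payload to a locally running webui started with --api."""
    req = urllib.request.Request(
        url + "/sdapi/v1/img2img",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # the "images" key holds base64-encoded outputs
```

This is handy once you're doing face swaps in bulk, since the seed and settings are pinned down in code rather than re-entered in the GUI.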
Merge output back with stock image
The final step is taking the output and merging it back with the original stock image.
In this case, since we have constants (the teeth and the earphones), alignment is a lot easier.
I like setting the new image to about 20-50% opacity and then lining the features up. When things line up, the areas which haven't been changed will look sharp rather than fuzzy.
Then, I start selectively masking out areas. As you can see from the image below, I did not use the portions from her cheeks and forehead.
This is because AI generated images can often lose the details on a face. So, I borrowed the texture from the original image.
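The same layer-mask idea can be approximated in code: paste the inpainted crop back at its original coordinates with a feathered mask, so the edges (cheeks, forehead) fall back to the original texture. A rough Pillow sketch, assuming you kept the crop box from earlier:

```python
from PIL import Image, ImageDraw, ImageFilter

def merge_face(original: Image.Image, inpainted: Image.Image,
               box: tuple[int, int, int, int], feather: int = 20) -> Image.Image:
    """Paste the inpainted crop back at its original coordinates,
    feathering the edges so the seam blends into the stock photo.
    (A rough stand-in for the Photoshop layer-mask step.)"""
    w, h = box[2] - box[0], box[3] - box[1]
    patch = inpainted.resize((w, h))
    # Soft-edged mask: fully opaque centre, transparent border.
    mask = Image.new("L", (w, h), 0)
    ImageDraw.Draw(mask).rectangle(
        (feather, feather, w - feather, h - feather), fill=255)
    mask = mask.filter(ImageFilter.GaussianBlur(feather / 2))
    merged = original.copy()
    merged.paste(patch, box[:2], mask)
    return merged
```

Photoshop still gives you finer control over which facial areas to keep, but this gets a clean seam automatically.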
Here is the final output.
The good thing about these two images is that the skin tone doesn’t require any correction.
However, if you are changing the image from a dark to light skin tone (or vice versa), then you will have to do the additional step of masking out all areas where you’d need a change of skin tone.
In the above image, this would mean the neck, arm and hand.
Stable Diffusion is amazing for marketing
Stock imagery is becoming less and less important in my toolset because of Stable Diffusion.
The images below are made using this method.
Note: if you are looking for a tutorial on how to use this face as a reference and paste it onto different images, I have a separate tutorial on how you can keep a face consistent by using a reference image.