tl;dr - image to image, rather than text to image.
This is an enhancement of the Generative Fill tool.
The Problem: Generative Fill sometimes doesn't "get it." I can describe exactly what I'm looking for and still wind up hitting "Generate" over and over without the AI coming close to the mark. It also has trouble with scale, producing objects that are too large or too small for the scene.
Rather than coming up with a dozen ways to describe the same thing in the hopes that the AI will understand, let me show the AI what I want.
Here's a breakdown of how it would work:
- Object Placement: I start by placing a photo or graphic of an object onto a scene in Photoshop. This object could be anything from a person to a piece of furniture to a building. I set the scale and position myself, so the AI doesn't have to guess them.
- Generative Fill Invocation: After placing the object, I select the option for "Adaptive Object Integration" in the Generative Fill tool.
- Object Recognition: Generative Fill identifies the object I've placed within the scene, recognizing its shape, texture, color, and other attributes.
- Adaptive Synthesis: Generative AI synthesizes a new version of the object that integrates seamlessly into the scene, adjusting lighting, perspective, shadows, reflections, and other visual properties so the object looks like it naturally belongs there. (A rough sketch of this step follows below.)
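For anyone curious how the last step could work under the hood, here's a minimal sketch using an open-source image-to-image diffusion pipeline (Hugging Face diffusers). This is not how Adobe's Generative Fill is implemented; it only illustrates the core idea of conditioning generation on a placed object rather than on text alone. The model ID, file names, prompt, and strength value are all placeholders.

```python
# Minimal image-to-image sketch, assuming an open-source pipeline (diffusers).
# Not Adobe's implementation -- model ID, file names, prompt, and strength
# are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# The scene with my object already pasted in at the scale/position I chose.
composite = Image.open("scene_with_placed_object.png").convert("RGB")

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="a leather armchair in a sunlit living room",  # optional text hint
    image=composite,
    strength=0.35,       # low strength: keep the placed object's shape and
                         # scale, let the model re-render lighting/shadows
    guidance_scale=7.5,
).images[0]

result.save("integrated_object.png")
```

The key knob is strength: a low value preserves the placed object's shape and scale and only re-renders surface details like lighting, shadows, and texture, which is roughly the "adaptive synthesis" behavior described above.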