Generative fill quality has degraded to the point of being unusable since the beta
During the beta period, generative fill and generative expand were amazing features. They let us add believable, high-resolution objects to photos, fill large areas with imagery that convincingly extended the image, and create high-quality content that looked harmonious with the rest of the photo.
Now, in the main release, generative fill works only at very low resolution (1024 pixels per generation), and the results look unconvincing and distinctly "AI-like" unless the fill is just a generic texture. Something as simple as adding a tree that matches the other trees already in the photo is now nearly impossible to do in a way that looks any good.
It seems generative fill is now more or less only usable as a slightly more advanced form of content-aware fill for patching small areas, which is not at all how the feature was originally demonstrated or how it is advertised.
Photoshop users typically work with high-resolution images; dimensions of several thousand pixels are common. Generating fills in a series of 1024-pixel pieces, one at a time, is not particularly helpful for anything beyond replacing small blemishes or hiding tiny objects.
The beta version was fantastic and looked like it was going to be a sea change in the power and usefulness of Photoshop, so the Adobe team clearly has a way to deliver this feature in a form that is actually useful. However, the extreme reduction in quality, both in resolution and in the realism of the generated imagery, makes the production version nearly useless and nothing like the amazing beta.
Please, Photoshop team, bring back the resolution and AI model quality that the beta versions of generative fill and expand introduced.
