Issue 1: Generating a fill as a transparent overlay. If I want magical sparkles overlaid, that's what I want. I don't want it to alter the background or blend with it; just add it.
Issue 2: Generating a fill as a transparent overlay with 3-dimensional object recognition, so it doesn't alter any of the other layers but blends in shadowing, and/or doesn't alter the other layers but blends the shadowing cast by the addition. Why, with all of Adobe's amazing software and power, is this still not a thing? In order to render images and cast shadows correctly, the AI HAS to recognize the 3-dimensional placement of objects. So, if it can analyze the photo (house, person, foreground, background, etc.) and recognize light placement and shadows, then when I add a transparent object (a tree, rocks, magical sparkles, a light bulb, etc.), the AI should be able to render that into the image. This should extend to fonts in Illustrator, where I can take a core font and use the AI to break it up, add to it, etc.
Issue 3: Adobe trying WAY too hard on "real". Yes, real is good. But we need to be able to render in a multitude of art styles and forms, with better results than the styles you offer in Firefly (which is, I'm sorry to say, far behind other online AI models; I have to use other AI models for core rendering and then bring the results "in" to Adobe for hours and hours of layering and fixing, making it almost a useless endeavor sometimes). More importantly: adding a tree / object / change / alteration should maintain the style of the picture. So, if I've created an image in Midjourney in the style of Michelangelo, then brought it into Photoshop to alter it (because Midjourney has massive issues, too), my alterations, whether of the existing image or a layered add-on, should stay in the style of the image I have.
Issue 4: Rendering text. I use Illustrator for all of my text. Illustrator is, frankly, amazing. However, if I want 3-dimensional or specially rendered fonts that keep the style, shadowing, or whatever of the current image, this should be something the AI can handle.
Issue 5: The AI is not well trained yet (especially in regular Photoshop; I regret going back to that and uninstalling Photoshop Beta. Why they're not run together, with updates made to both instead of just one, is increasingly frustrating). If I ask for a full-body figure: no. If I ask for a statue of an angel: no. Longer and more detailed prompts make it worse, so I end up with more randomness that is highly ineffective. Until you've trained your AI to work at the level of Midjourney, Yodayo, or some of the others, it is still too ineffective. Sadly, Midjourney is still all over the place and ineffective itself.
Issue 6: Limitations on verbiage. Sorry, but if I am doing a Venus de Milo statue, it has some clothing missing. Adobe is being very restrictive on this. Even this post has been moderated, requiring certain verbiage to be removed, and it was QUITE appropriate. This has to stop. You're art software. I can't generate half of what I need for Halloween because of your restrictions.
Issue 7: The responsiveness of the AI to simple commands like "remove" is wholly ineffective. If I try to remove a shadow from a house and driveway, for example, it will re-render the whole thing, take out elements, and not simply remove the shadow. Removing a shadow is a matter of adjusting light and colors (hue, vividness, etc., all features in Adobe that the AI should use rather than trying to recreate the image from scratch). Furthermore, it does not remove the whole shadow, or a whole table; it simply removes some portion that it seemingly chooses at random.
Issue 8: Object removal. Okay, this is "okay", but very dysfunctional, because the AI tries to select elements around the bounding box and guess the fill from nearby colors. Sometimes it seems like the AI recognizes objects; other times it doesn't.
Issue 9: Expanding object recognition. Object recognition would be phenomenal. If I have concrete, I can make it weathered, overgrown, cracked, etc. If I have water, I can change its color or add bubbles while keeping it transparent. The object recognition database still seems to be lacking, and the 3-dimensional placement of objects in a picture is definitely lacking. This should not be an issue, since these are still images, not moving ones, which is far less of a load on the processor.
Issue 10: Finally, I have noticed over the past several months an increasingly overwhelming failure of Adobe to optimize for my computer. Why? I have a GeForce 3060 with 8 GB of VRAM, 32 GB of system RAM, an i7 with 16-core hyperthreading, Alienware, etc., etc. I can render a LOT. I can run Unreal Engine at full strength, in video and rendering mode. But Adobe products have been crashing, both Beta and regular. Why is this? Something has definitely changed. Whether I am simply scrolling through my horrifically large library of fonts to pick the best one for the text on a large photo, or trying to use Neural Filters more than once a session, Adobe products are crashing, including Photoshop. This is disappointing and makes the program almost useless for these neat add-on features. I love Adobe. It does it all, almost. But the add-on features you're building are not going well.
So, I am reinstalling Photoshop Beta. I am still using Illustrator Beta. I don't know why I can't use the main software, since its operational stability is normally just fine. But these changes I've recommended, along with my earlier recommendations, would be amazing.