Participant
May 23, 2023

P: Generated images violate user guidelines

 

So as you can see, it's a PG-13 relatively inoffensive image of a woman in a bunny outfit. The top worked fine, and I was able to complete the top ear, which is cool. When I tried to extend the bottom with generative fill, though, I got this warning. They're just a pair of legs wearing stockings, and I wanted to extend it.

It feels like a false flag, though I could be wrong. I suspect it would do the same for women in swimsuits.

Figured I'd share here.

1084 replies

Leslie Moak Murray
Community Expert
June 8, 2023

She has six fingers on her left hand and the inner bend of the elbow line on the raised arm is wrong.  A few more "AI hands":

Graham24508943nobd
Known Participant
June 8, 2023

Yes, you're spot on, of course: a beta gets the bugs ironed out before the next update, and the testers are informed what fixes have been done. Adobe, you REALLY DO need to interact more with us to get the job done better.

slaughters85j
Participating Frequently
June 8, 2023

I apologize in advance to those who know this already:

 

All Adobe has done is incorporate inpainting and outpainting with the option to weight each (prompt/no prompt), like literally hundreds of other applications are doing right now, spun up largely due to the whole AI craze; and Adobe's marketing team has dubbed this Generative AI.  Look here to see when OpenAI debuted outpainting last August; look familiar? https://openai.com/blog/dall-e-introducing-outpainting (click the time lapse)

 

Without nerding out too much: all of these text-to-image, inpainting, and outpainting features popping up are made possible by iterative models trained on very large datasets of original imagery, styles, content, etc. (most of them are based on Stability AI's models).  Unless you build an original model, namely Stable Diffusion 1.x - 2.1, which is very time- and cost-intensive, you're stuck relying on the dataset and controls that the original was made with.  *Stable Diffusion is, for image generation, the equivalent of OpenAI's ChatGPT natural language service.  As such, there is a finite amount of control that application producers can place on their respective services.  I would place Midjourney's service (perhaps the most widely known and most straightforward to use) as about as restrictive as Adobe's; Midjourney is more text-to-image and image-to-image exclusive (weighted by text / weighted by an existing image). *There are plenty of others for both Mac and Windows, but they require a greater learning curve.  The results of these others can be fed as input images into Adobe PS but require additional steps in your pipeline.
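To make the inpainting/outpainting idea concrete: whatever model does the drawing, the last compositing step amounts to a mask-weighted blend of the original pixels and the generated ones. This is a minimal conceptual sketch, not Adobe's or Stability AI's actual implementation; `composite_inpaint` and the `weight` parameter are my own illustrative names.

```python
def composite_inpaint(original, generated, mask, weight=1.0):
    """Per-pixel blend: keep the original outside the mask, use the
    generated content inside it.

    All inputs are lists of rows of floats in [0, 1]; a mask value of 1
    marks the region to fill, and `weight` scales how strongly the
    generated content replaces the original (a crude stand-in for the
    prompt/no-prompt weighting mentioned above).
    """
    return [
        [o * (1.0 - m * weight) + g * (m * weight)
         for o, g, m in zip(orow, grow, mrow)]
        for orow, grow, mrow in zip(original, generated, mask)
    ]

# toy example: fill the right half of a 2x2 grayscale image with white
original  = [[0.5, 0.5], [0.5, 0.5]]
generated = [[1.0, 1.0], [1.0, 1.0]]
mask      = [[0.0, 1.0], [0.0, 1.0]]
result = composite_inpaint(original, generated, mask)
# left column stays 0.5, right column becomes 1.0
```

The real services run the diffusion model to produce `generated` conditioned on the prompt and the unmasked pixels; the blend is only the final, boring step.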

 

For Adobe, there are really three-ish zones: fully restrictive (I think the mutual consensus is that this is where the current beta sits), the middle gray zone (where some controls are in place to restrict a good chunk of what most would consider inappropriate content), and the lower zone being fully open, where the only 'controls' are the ethical notes from the model creators.  Adobe needs to implement controls that bring their Generative AI service down to the gray zone, with the understanding that no service will be able to prevent misuse completely.  Regardless, there are many other options out there, none of which, including Adobe's, leave breadcrumbs that lead back to which application or service was used to create the resulting images.  If Adobe wants complete control, then they will need to build an original dataset that is solely owned and controlled by Adobe; if that is what they've done, then they need to train it further with gray-zone content, otherwise users will and can (for free) use other services, and Adobe will find themselves trailing behind.
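The three zones are really just where a service chooses to put its blocking threshold on some internal "how inappropriate is this?" score. A hypothetical sketch (the zone names and threshold values are mine, not anything Adobe has published):

```python
# Illustrative thresholds only: the same classifier score gets a
# different outcome depending on which zone the service operates in.
THRESHOLDS = {
    "fully_restrictive": 0.10,  # roughly where the beta feels like it is
    "gray_zone":         0.60,  # blocks clearly inappropriate content only
    "open":              1.01,  # the service itself blocks nothing
}

def allowed(score, zone):
    """Allow the output unless its content score reaches the zone's cutoff."""
    return score < THRESHOLDS[zone]

# a mildly "skin-colored" result scoring 0.3 is blocked in the
# restrictive zone but would pass in the gray zone
restrictive_ok = allowed(0.3, "fully_restrictive")
gray_ok = allowed(0.3, "gray_zone")
```

Moving "down to the gray zone" is then just raising the cutoff, accepting that some misuse slips through in exchange for far fewer false rejections.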

 

Since launching their beta, I haven't seen any iterative improvements to the current version.  Not how a beta is supposed to work... Adobe developers?

Graham24508943nobd
Known Participant
June 8, 2023

As I've mentioned before, it'll give you a guideline error even if NOTHING is entered in the prompt box. I tried removing a stone from one of my own pictures, taken last week in woods nearby, and even THAT got rejected. I just wish they'd get the bugs ironed out soon, before I lose my head with it and go back to using the old PS 2023.

Participating Frequently
June 8, 2023

The gen fill engine appears to be analyzing the request and doing a drawing; then it determines what it drew is X-rated, or not permitted, so the warning pops up. I tried to change the curving lines on a child's cartoon of a big bear's paw, and I am assuming it drew a curvy nude butt and rejected its own drawing. The only answer will be that instead of this rejection, we get a pop-up screen where we actually describe what the image is and what we want. Then Adobe uses ChatGPT (no kidding) to analyze what we said and create a prompt. I saw a YouTube video where a guy did simple, easy training for ChatGPT to write brilliant prompts for Midjourney. The problem with this beta gen fill is that it does not write good prompts for itself, because it is jumping to conclusions and needs the user's help. A flat-out rejection like it is giving does not help.
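That generate-then-check flow would explain the empty-prompt rejections people are seeing. A minimal sketch of the suspected behavior, with stand-in functions (`generate`, `is_flagged`, and the stubs below are all hypothetical, not Adobe's actual API):

```python
def generative_fill(prompt, generate, is_flagged):
    """Suspected flow: draw first, moderate the *output* afterwards.

    Because the safety check runs on the generated pixels rather than
    the prompt, even an empty or perfectly innocent prompt can be
    rejected if the model happens to draw something the filter dislikes.
    """
    image = generate(prompt)  # the model draws something, prompt or not
    if is_flagged(image):
        return None, "Generated images violate user guidelines"
    return image, None

# stub model and classifier to reproduce the failure mode from the thread
def fake_generate(prompt):
    # with no prompt, the model free-associates from the selection
    return {"content": prompt or "unpredictable fill"}

def fake_flagged(image):
    return "unpredictable" in image["content"]

image, err = generative_fill("", fake_generate, fake_flagged)
# empty prompt -> model improvises -> its own output trips the filter
```

If that is what is happening, the fix suggested above (have the user describe the scene and let a language model write the prompt) attacks the right end of the pipeline: a better prompt constrains what gets drawn before the output filter ever sees it.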

Participant
June 8, 2023

90% of the images I try to generate end up with "The images violated user guidelines" (see attached image).

The error pops up so often that the AI generative fill is pretty much unusable at this point.

Participating Frequently
June 8, 2023

Photoshop Beta v24.6.0

MacBook Pro 13 in, M1, 2020

Mac OS Ventura 13.4

 

I just typed "tesseract" on a blank canvas expecting the usual 4-dimensional cube. Instead I got the message that said the image violated guidelines.

Participant
June 8, 2023

I'm not here enough to know if this is being discussed or not, but entering a prompt is hit or miss for me.  It worked a few times tonight, then absolutely refused no matter what I said.  I was editing a building with not a person in sight.

Participant
June 8, 2023

Now that you mention it, I am editing things that are sort of skin-colored. The other thing I noticed today is that it seems to give me a really hard time when the selection is oblong and "skin"-colored. What's really funny is that yesterday I edited a piece of crown molding that was kind of shaped like that. After fighting with it for a while, it filled all right... and it looked like a colorful version of the very thing it's trying to prevent me from generating. lol

Known Participant
June 8, 2023

I was just changing some skin blemishes and getting the error, but by putting just the letter "S" in the text box, it worked OK.