Participant
May 23, 2023

P: Generated images violate user guidelines


So as you can see, it's a PG-13 relatively inoffensive image of a woman in a bunny outfit. The top worked fine, and I was able to complete the top ear, which is cool. When I tried to extend the bottom with generative fill, though, I got this warning. They're just a pair of legs wearing stockings, and I wanted to extend it.

It feels like a false positive, though I could be wrong. I suspect it would do the same for women in swimsuits.

Figured I'd share here.

1085 replies

daniellei4510
Community Expert
June 2, 2023

OK. I've only tried this twice, but it's been successful twice, so maybe I'm on to something (don't hold me to it). I was attempting to use generative fill to remove tape and a feeding tube from the cheek of a newborn baby. This was at the request of the mom, because she wanted some decent baby pictures to look back on someday without the feeding tube showing up in every photo.

I made a loose selection of the tube and repeatedly received guideline violation warnings. On a whim, I made an additional but very small selection on another area of the photo (something I could easily fix with content aware or the spot healing brush).

On my next attempt, generative fill worked flawlessly, with three examples to choose from. I tried it a second time and had the same positive results. Then I tried it again with a different image of the same baby. If I only selected the tube, I'd get warnings. When I selected another small area of the photo in addition to the main selection, it worked perfectly once again.

Adobe Community Expert | If you can't fix it, hide it; if you can't hide it, delete it.
Graham24508943nobd
Known Participant
June 2, 2023

I'd like to know why, when asking for a wrestler in a prompt (with my own head left in), I was given the body of a scantily clad female with ginormous boobies. Same result when I prompted a Viking. Scantily clad female!! Double-standard bots, if you ask me.

Known Participant
June 2, 2023

1st prompt was "Cyberpunk Nightclub" with background selected

2nd prompt was "Cyberpunk clothing" with a box around her torso selected.

Known Participant
June 2, 2023

P*rn might be forbidden, and community standards will prevent you from, say, swapping someone into lingerie or a bikini... but that won't stop the AI from getting creative on its own.

Original and AI'd version.


Graham24508943nobd
Known Participant
June 2, 2023

I too have tried numerous empty prompts, with the same results you're experiencing. Seems to me it's getting worse. The hands are just appalling most of the time as well.

Cody Achter
Participant
June 2, 2023

It blows my mind that we can use AI to generate anything to do with alcohol, but marijuana is against guidelines. I run a weed business in a legal state (which almost half the US is now) and wanted to add a weed bud to a photo, but nope, can't do that! IT'S AGAINST THE RULES! Because why? Weed is worse than alcohol somehow?...


Participant
June 2, 2023

I want to add legs to a photo of a caterpillar, yet when I write just the prompt 'legs', it thinks I'm being inappropriate?

carlosh75246858
Participant
June 2, 2023

Repeatedly, I have encountered situations where I am unable to utilize this technology due to the judgment of an algorithm that deems it unsuitable based on corporate guidelines. Explain to me how this is considered an improvement when we are given a tool that refuses to function. I would assume that your technology possesses enough intelligence to distinguish between what is doubtful and what clearly goes against corporate guidelines. This unfortunate outcome arises when legal departments become involved in the development of software. The face is blurred to protect the subject.


Participant
June 2, 2023

Don't think this is an inappropriate object.

Photoshake
Known Participant
June 2, 2023

I suspect they do track every violation, but don't tally them toward some "20 strikes and you're out" limit. There is a difference between the AI getting it wrong and someone trying to abuse it.


I'd like to think I could have 1,000 flags on innocent images, or even non-pornographic NSFW images, and they would review them and conclude "the AI failed here." But if it was clear misuse, especially where a minor was involved and someone was trying to turn the image NSFW, then Adobe would and should report it if the prompt was intentionally trying to create harmful content. If an innocent prompt led to an NSFW image, then Adobe would be at fault, and that is exactly what they want to avoid.

Photoshop as it stands can already be used maliciously; it is not hard to take an NSFW image, put it on a celebrity's head with simple blending skills, and call it a deep fake. But AI makes that task a lot easier for a layman without retouching skills. That is what Adobe is trying to protect against.