Participant
May 23, 2023

P: Generated images violate user guidelines

 

So as you can see, it's a PG-13 relatively inoffensive image of a woman in a bunny outfit. The top worked fine, and I was able to complete the top ear, which is cool. When I tried to extend the bottom with generative fill, though, I got this warning. They're just a pair of legs wearing stockings, and I wanted to extend it.

It feels like a false flag, though I could be wrong. I suspect it would do the same for women in swimsuits.

Figured I'd share here.

1085 replies

jdavidbuerk
Participating Frequently
July 11, 2023

@Gregusaurus It's not outlandish at all.  Such digital watermarking utilities and services already exist for still images, video, and audio, and are commonly used for tracing and enforcing copyright protections.  "Against the rules?"  Yes, the rule of law - the reason the filtering is in place at all is to prevent laws from being broken; Adobe has to protect itself from liability of their services being used for illegal activity, and I hardly think it should be controversial to pursue people creating illegal imagery.

Inspiring
July 11, 2023

That is one frightening recommendation. For a watermark to be implanted that can be used by the "authorities" to track down the artist who dared to create something that is deemed "against the rules" is so horrible that I am shocked it is being proposed.  Maybe you are being sarcastic? 

jdavidbuerk
Participating Frequently
July 11, 2023

Reminder to those following this issue; it's been said quite a few times already, but there seems to be some confusion about the "violation."  This generally is not an issue of the words you're using being "censored"; instead, it's the AI's output image that is triggering false positives via the overly restrictive image monitoring in place before the output is released to a user (likely due to CSAM concerns, but as demonstrated here, the filtering is much too strong for typical usage).  Yes, a word filter is surely in place, but that's not what's causing the problem in most cases shown in this thread, where people are inputting innocuous prompts such as "wall," "arm," "background," and so forth.  The reason changing your input terms will frequently help avoid the perceived violation is not because of the words input; it's because the AI's output has changed and is no longer being internally flagged.  This is the same reason even a blank input prompt on an innocuous image can still be flagged as "inappropriate."
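The flow being described can be sketched in a few lines. This is a hypothetical illustration only — the function and parameter names (`generate_with_filter`, `model`, `image_classifier`) are invented and this is not Adobe's actual pipeline — but it shows why the check behaves the way the thread reports: the gate inspects the generated image, not the prompt text.

```python
# Hypothetical sketch; names are invented, not Adobe's real pipeline.
def generate_with_filter(prompt, model, image_classifier):
    """Generate an image and gate it on an output-side safety check."""
    candidate = model(prompt)          # sampled output varies run to run
    if image_classifier(candidate):    # the check inspects the *image*
        raise RuntimeError("Generated images violate user guidelines")
    return candidate                   # innocuous prompts can still fail
```

Under this model, rewording the prompt "fixes" the error only because it changes the sampled image, which is also why an empty prompt can still be flagged.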

 

I think most people discussing the Generative AI problems agree that the filtering currently in place is much too stringent for regular, practical use, so this is the real core issue at hand: balancing useful flexibility and expression using the AI with reasonable safety and legality.

 

My recommendation for balancing artists' freedom with safety from nefarious use is to employ a digital watermarking function linked to a user's Adobe ID (and presumably the history of their use of Generative AI) on files output using Generative AI, and to remove all the server-side filtering except for input keywords describing illegal acts.  This way, users are much freer in their use of Generative AI to work on their images, and Adobe is shielded from liability should its service be used for illegal activity.  And should someone use the software to create illegal images, real or fictional or otherwise, the embedded watermark could be looked up by Adobe, their access to Adobe services banned, and their identity shared with the proper authorities.  My understanding is that such watermarking technology has existed for quite a long time and can be employed as a combination of embedded metadata (which Photoshop of course already does) and imperceptible pixel patterns embedded in exported images (essentially an invisible QR code); printers and scanners, for example, include similar countermeasures to trace criminal activity such as forgery.
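The "imperceptible pixel pattern" idea can be illustrated with the simplest possible scheme: least-significant-bit (LSB) embedding. This is a toy sketch with an invented `uid42` payload and a raw grayscale byte buffer standing in for an image — real forensic watermarks use far more robust spread-spectrum techniques that survive resizing and recompression — but it shows how an identifier can hide in pixels with at most a ±1 change per byte.

```python
# Toy LSB watermark sketch; payload and buffer are hypothetical, and
# this is nothing like a production forensic watermarking scheme.

def embed_watermark(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide payload bits, MSB first, in the lowest bit of each pixel byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # replace LSB; at most a +/-1 change
    return out

def extract_watermark(pixels: bytearray, length: int) -> bytes:
    """Read back `length` bytes from the pixel LSBs."""
    data = bytearray()
    for i in range(length):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (pixels[i * 8 + bit_index] & 1)
        data.append(byte)
    return bytes(data)

# Hypothetical usage: tag an 8x8 grayscale buffer with an account ID.
pixels = bytearray(range(64))
tagged = embed_watermark(pixels, b"uid42")
assert extract_watermark(tagged, 5) == b"uid42"
```

Because only the lowest bit of each byte changes, no pixel value shifts by more than 1, which is visually imperceptible; the trade-off is that such a naive mark is destroyed by any re-encoding, which is why real schemes are far more elaborate.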

Graham24508943nobd
Known Participant
July 11, 2023

Thanks for that, Dave, big help! I just imported one from an old image I had removed the background from and used that. I didn't faff about as much as I could have with this, of course. The two lassies had horrible defects, so it took ages tweaking; the hands were the worst to correct.

 

davescm
Community Expert
July 11, 2023

'Sporran' on its own resulted in a violation but 'sporran on kilt' was accepted here

Dave

Kevin Stohlmeyer
Community Expert
July 11, 2023

@Graham24508943nobd Generative fill is US English only at this time. Sporran is a Scottish word.

Graham24508943nobd
Known Participant
July 11, 2023

well it doesn't work at all on the beta

daniellei4510
Community Expert
July 11, 2023

Sporran works fine on adobe.firefly.com. It is not a banned word.

Adobe Community Expert | If you can't fix it, hide it; if you can't hide it, delete it.
Graham24508943nobd
Known Participant
July 11, 2023

Latest disallowed nonsense is sporran. Admin, please pass on and correct. Sporran IS the correct spelling and is a main accessory of a Scottish outfit.

jane-e
Community Expert
July 11, 2023

@zvi_t 

 

I'm glad "spatial" worked for you! I've been noticing the misspellings that cause errors and have deliberately misspelled words to test this. Several folks have wondered why on earth "elefant" would cause an error, for instance, and I agree that spell check would be a good addition. Also, many folks are writing in English when it's not their native language. That has to be tough.

 

Adobe has been moving all of these posts into one thread so they can be reviewed while GF is still in beta.

 

In addition, see Pete's pinned reply for sending files with false flags to the Photoshop team.

 

Jane