Contributors should not be allowed to attach the name of a real place to an AI image; however, the Moderators don't know what every place on earth really looks like, so we end up with such false images in the database. Unfortunately, "Buyer Beware" applies in such instances, and it's up to the Buyer to validate that the image they're licensing really is the place it purports to be.
Hello @Jasckal, sorry for the issue with the image. I've previously checked internally on this type of report. The asset is correctly tagged as Generative AI, and there is no prohibition on creating inaccurate depictions of places. If that were not the case, I would submit an internal ticket to have it reviewed for removal.
Those may be your internal instructions, but the public-facing contributor instructions say this:
Files may also be considered noncompliant due to watermarks, inappropriate or irrelevant keywords or titles, or questionable or defamatory content.
In addition:
(...) Using long, non-descriptive, repetitive, or irrelevant titles and keywords can be perceived as spam as well. Spamming is strictly prohibited and may prompt us to block your account or close it permanently.
Obviously, a picture that does not show the stated location but claims to be that location is using irrelevant keywords or titles, and it should be rejected. Admittedly, there is room for judgment here: I could create an illustration of the Eiffel Tower and use it as a symbol for Paris, without showing the real city or the real tower. But putting a location keyword on a picture that looks like a photo of a place, without actually being that place, may be considered deceptive.
I agree with you that no one should expect a generative AI picture to be the real thing, but still, as a customer I would complain loudly. The contributor instructions also say:
Locations also benefit content: Add the country depicted in your content to help customers find regionally relevant content. For example, if you include a city, state, or province in your keywords, also add the country. (London, England has a different feel from London, Ontario.) Also, a kitchen in Mexico looks different from a kitchen in Japan, Norway, or the US. Customers expect accuracy, so don’t use multiple locations that an asset could depict. Conflicting information reduces trust and often results in customer complaints. Customers and their audiences know the difference between Hawaii and Costa Rica.
(The quoted passages are from publicly available Adobe contributor instructions.)
But yes, at the end of the day, buyers should be aware that a generative AI stock asset labeled as a specific location is surely not that location. One more reason, as a buyer, to avoid generative AI, especially since real photographs of the place are probably available.
ABAMBO | Hard- and Software Engineer | Photographer
@Abambo and everybody on this thread make very good points. The decision on this, however, is outside of my control. Here is some of the guidance I received around this topic:
" It's correctly labeled as GenAi and could be used for any number of end uses. Adobe is not responsible, and expressly disclaims any liability, for the accuracy of any Stock Asset, including any related descriptions, categories, captions, titles, metadata, or keywords included with any Stock Asset."
I sincerely appreciate your help with this, and I know that you are working within the limits of your internal instructions. But cases like this really undermine the usefulness of Adobe Stock. On the other hand, buyers should be aware that generative AI is not to be used to design a travel guide.
ABAMBO | Hard- and Software Engineer | Photographer