Palestinian / Israel AI images
Hey all!
Now that the internet is flooded with fake pictures of the war in Gaza, Adobe Stock should be completely free of that kind of picture. I honestly cannot understand how they have managed to get past the moderators.
What do we think about this?
Do not bring such war and political issues into Adobe; they are nothing but nonsense from someone who just wants attention.
Adobe is a free institution; you can express yourself through your work.
I honestly cannot understand how they have managed to get past the moderators.
What do we think about this?
By JoBo
If they have technical issues, violate your copyright, or contain prohibited elements, you can report them here. In all other cases they fulfill the requirements of Adobe Stock. Like all generative AI assets, they are fake pictures, and the buyer is well aware of that.
This does not mean that I personally agree with creating such assets.
This should be about the purpose, not the awareness of what is fake or not.
The purpose is unfortunately a bitter reality. If I were writing about the conflict and did not have access to real photographs, I would probably use such assets for illustration.
I have worked in editorial for 35 years and we would never think of doing something as stupid as using AI generated images to enhance the content of an editorial text. Editorial credibility would immediately fly right out the window. When you work with serious journalism, you don't use those kinds of pictures. It's as simple as that.
And now we see what AI will do to journalism...
I have worked in editorial for 35 years and we would never think of doing something as stupid as using AI generated images to enhance the content of an editorial text.
By JoBo
Sure, when you are a journalist, you have access to real pictures for your stories. If you are a part-time blogger with a limited audience, you do not necessarily have access to those pictures. 35 years ago there were no generative AI assets, by the way, but you did do collages, illustrations and other things. AI just adds to that.
As a serious journalist (or blogger), there is only one thing that is relevant: never mislead the reader about authenticity. This applies to both text and images. The same rules applied 35 years ago, but now it is even more important to be on your guard. This is precisely why you should avoid images that the reader might mistake for genuine. There are no exceptions to that rule, whether you have one reader or a million. The editorial principles apply regardless of context.
As a serious journalist (or blogger), there is only one thing that is relevant: never mislead the reader about authenticity. This applies to both text and images. The same rules applied 35 years ago, but now it is even more important to be on your guard. This is precisely why you should avoid images that the reader might mistake for genuine. There are no exceptions to that rule, whether you have one reader or a million. The editorial principles apply regardless of context.
By JoBo
Who here said that the reader would be misled? The reader can also be misled by edited images, as has been shown in the past, so using an AI-generated asset does not add anything new to this. You are biased against generative AI assets and think they should not be used in the wild. As a photographer, I am biased against them too, at least against most of what I see. But there is a difference between what I think personally and what the reality is and will be. Generative AI will enter serious journalism as every other technique did before it, and unserious journalism will misuse generative AI, as it misused every technique before.
Just for context: Twitter is full of misleading articles with real pictures taken out of context or from other sources; some are even taken from video games. We did not need to wait for generative AI to create misleading and fake news: https://www.wired.com/2008/07/iran-missile-ph/. So there is no need to open a separate discussion on generative AI. As soon as you do a reverse image search, you will see the image's provenance, whether it is a generative AI asset or an otherwise misused one.
The real issue is using any picture and text to create fake news and fake facts. That's why fact-checkers are important if you cannot determine the truth yourself.
It is clear that you have no grasp of journalistic principles. To understand why an editorial image can never illustrate anything other than what the text describes, you need to take that course. You need to make a distinction between what is a report and what is a story or a description. For stories and descriptions that do not directly refer to a report, illustrations are used. When it comes to reports, authentic images and illustrations are used, but NEVER images that could be mistaken for authentic.
Now I think you understand a little better how it works.
It is clear that you have no grasp of journalistic principles.
By JoBo
Let me say that you can't judge that.
