I spend quite a bit of time every day in Photoshop, for a whole host of reasons. Creating art pieces, making promo work, logos, touching up a resume, actually working on simple photos...all kinds of things! It's got its ups & downs, ins & outs, but I love it much more than any of the alternatives.
This new "AI" tool, where it takes stolen "training data" & fills a bunch of stuff in by remixing the stolen art, is making me reconsider using PS. If this is the future, I want out. If it's going to use my work to "generate" stuff for other people, I want out.
PS & Adobe used to be a company, or so I thought, that supported human creativity. These sorts of tools undermine human creativity on several fronts.
Is this the direction Adobe is going? Hopping onto the digital plagiarism roller coaster & working to wipe out real art?
Why? & how can I make sure nothing I make ever ends up stolen as "training data" for Adobe?
Generative Fill uses training data from Adobe Stock and Creative Commons images, according to their press releases.
Since by submitting images to Adobe Stock you essentially sign over the rights, and Creative Commons content is free to use, what exactly is being "stolen"?
I really doubt most people taking the time to upload things to AS were doing so with the intent of letting Adobe make it so people didn't use their work at all. Sure, it's technically allowed, but it's not a purpose that was envisioned nor intended. And it's not like the tool gives credit when it spits out the garbled rehash. It's stealing, even if they have a loophole that allows it.
I also don't really trust Adobe to only use AS & CC0 work in the future. Companies don't tend to become satisfied with just a little bit of overreach.
@Austin27542330tcjj, you are spreading misinformation.
Folks who sell on Adobe Stock can opt out. Details here:
In addition, Adobe has said that those who contribute to Stock will be compensated:
Thanks for highlighting that "content analysis" option—I switched it off.
If Adobe were really committed to doing this "ethically," though, off would be the default. Lots of people will never find out about that option.
Adobe is taking an ethical approach. Here are a few articles that you might consider:
This page has several links, including:
"If you have a question about AI ethics or want to report a possible AI ethics issue, please contact us."
Wall Street Journal
How Adobe’s Ethics Committee Helps Manage AI Bias
"Last week, Adobe made waves by announcing the beta release of its new text-to-image generative artificial intelligence (AI) model, Firefly. Adobe says its new platform wasn’t built using stolen images, but rather, as Adobe boasts, Firefly has been trained using Adobe Stock images, openly licensed content, and public domain content. Adobe is Building Its AI Model in The Right Way It’s an admirable way to build an AI platform, especially in the face of competing models that are built using stolen and unauthorized content.'
I realize they are using the term "ethical," but to me, there is nothing ethical about this kind of tool. If it's making something by taking humans' work & spitting out a pale imitation of it, that's not ethical, regardless of whether they technically have permission for the training data they use—though I'm sure most people uploading to Adobe Stock never intended for their work to be misused in this way.
I've used generative fill to extend an image wider or taller and found it to do a decent, but not perfect, job. The grout in the pavement didn't come through. I watched Terry White use it to remove glare from eyeglasses and make a photo a little wider or taller to "reveal" a shoulder or shoes that were never in the original photo. And although I know other ways to remove tourists from a landmark, this will be faster.
You answered quickly, so you may not have read the articles yet. Be sure to follow the link to comment to Adobe.
On the Beta forum, there are a few folks complaining that generative fill is too restrictive. From what I've read, there is a lot of stuff it refuses to do.
I've been following the story, I've read those links before 🙂
There is no ethical way to make these kinds of "generative" tools. Thank you again for pointing me to the buried opt-out option in your other response—but again, making it an opt-out, rather than an opt-in, shows that they are prone to choosing expediency over ethics. Just because they aren't being as bad as other orgs, like OpenAI, doesn't mean what they're doing is good.
... but again, making it an opt-out, rather than an opt-in
I hear what you are saying. Obviously, this is an issue for many companies, not just Adobe. It's much like "free" trial services that require credit card details up front: the card gets billed once the trial expires. In both cases, the business model relies on the majority of users not opting out, or not cancelling the free trial before the payment conditions kick in.
To quote the classics: "Beware of the leopard".
AI is going to be a major problem, given the way many AI services in many countries are handling it. Keep that big picture in mind when thinking about AI and Photoshop.
Of course, Adobe doesn’t automatically deserve anyone’s sympathy, because they are a multi-billion dollar international company. However…unlike many of the other AI services, it is important to realize that Adobe is doing these things that most of the others are not:
Documenting what images train their AI, so it is known that they are all from a non-infringing source. Because that source, Adobe Stock, like any reputable stock service, already requires that proper rights be secured for an image before it is allowed to enter. Creators who do not agree are free not to participate, and their images will not be used for AI training (by Adobe).
Allowing the users of that source to consent and opt out, so that, again, it is known that all training images are from a non-infringing source and with their permission.
Also, because AI is so capable of confusing our ability to tell which image is real and not altered, Adobe has helped work out a Content Authenticity industry standard and get it into Photoshop, so that at least, if you look at a photo and content credentials are in there, you can see if it has been edited and by whom.
The point is: AI is going to continue to be a major problem for artists. But Adobe is putting together a way to do AI that, at least, will be a safer place to do it. In other words, in the future, our biggest problem with AI is probably not going to be Adobe. Because if you had the power to force Adobe to abandon all AI development, you would kill their efforts to establish an AI training source that is non-infringing to artists, yet you would still be left with all of the other less scrupulous sources of AI worldwide that are taking absolutely no precautions and hoovering up all of our images they can find online regardless of rights or legality. Because you’re not going to be able to put the genie back in the bottle.
Again, I’m not trying to make Adobe out to be some kind of saint, because they aren’t one. Just saying that of all the ways you can get AI imagery, Adobe is making moves that could make them one of the least harmful sources. They won’t be that way out of pure idealism or altruism, but because it will make it a lot easier for them to reassure their paying customers that there are not legal problems or rights infringements with their AI service. Because they will have the provenance trail to prove it for every training image. It is a very pragmatic and practical approach done for business/legal reasons, and as a fortunate side effect, those choices could potentially make their approach one of the less harmful AI business models for artists, when compared to the more questionable ways it’s done by many other sources of AI.
Adobe could also...simply not legitimize "AI" at all.
Jumping on tech trends is rarely healthy for large companies that take the long view. People made that same "business sense" argument around Adobe rushing to enable NFTs, & look where NFTs are now.
If the Hollywood strikes go well, if we get lucky with some regulation (or copyright court cases), if we manage to keep plagiarism engines out of art, this won't have made much "business sense."
I don't class Generative AI and NFT art as holding the same market significance.
You can try Adobe Firefly.
I don't want to try Adobe Firefly, that's what I'm talking about in the original post. It's all plagiarism.
I don't want plagiarism fill, I don't want plagiarism expand, I don't want a new "context toolbar" that encourages me to use the plagiarism features. How do I turn it all off? Is there any way to just...make PS never ever bring up its plagiarism features? I really hate this. They're gonna have to take it all back out in a year anyway after this stuff flops in court. Why do they insist on pushing it on everybody?
You can uncheck the Contextual Task Bar at the bottom of the Windows menu.
As for your crazed plagiarism rant, do you realise that the AI-generated content is derived from the Adobe Stock library? That means images that the owners contributed willingly for other people to use. Professional content creators use stock images for convenience, for time saving, and because it is invariably cheaper than photographing every image element they need for their projects. It's entirely up to every one of us whether we choose to use it.
You don’t seem to be new to this Forum, so your behaviour seems more than a little peculiar.
Please read this and act accordingly:
Given what Adobe has said about how the images are sourced for their specific service, can you provide an example of plagiarism within their AI service, that is not explained by the images coming from vetted Adobe Stock or public domain sources?
Also, they seem to be confident enough that the service is free of plagiarism that they are willing to indemnify enterprise customers for the content created with these features…in other words, they're willing to defend it in a court of law.
Basically, if anything stops generative fill in general, it isn’t going to be the little guys, it’s going to be the powerful legal departments of large companies who do not want to get caught up in these kinds of lawsuits. An AI service can succeed if it’s set up to not be subject to those kinds of lawsuits because they can document the rights status of the source images, and hopefully even compensate the creators. Then the plagiarism argument disappears, because if permission is either legitimately obtained or not required for a work, it isn’t stealing.
Companies who build AI by scraping the entire Internet regardless of permission may lose in court. Companies who can prove that they only use images with secured rights or images out of copyright, will be able to continue offering their AI service.
Another angle on this…although there is real concern about people making up entire new images based on other people’s images, if you look at the tutorials for Photoshop generative AI out there, a very large proportion of them have to do with how useful it is for basic repair and extension of your own art; it isn’t always about using other people’s art and styles.