Love the new AI stuff in the beta. But what if the AI, instead of just pulling off the web, could access a specific website where you have a license for stock images? That way, when the AI pulls from that material, you know you're not committing copyright violations.
As a promotion producer for a TV station, I know our company lawyers are all over the AI stuff because of how it pulls from different sites. You can't deal with the copyright mess. But say we pay for Adobe Stock, Shutterstock, or Storyblocks: if we could specify destinations for the AI to access, that would help with copyright issues.
You can't use any of it for commercial purposes right now, so it's a non-issue. And it doesn't just pull randomly off the web.
In addition to what Melissa said, see the guidelines here:
and run all copyright issues past your corporate lawyers.
Adobe's (generative AI) Firefly is mitigating copyright concerns by training their system with vetted royalty-free graphics & photos that have been curated from Adobe Stock's 100+ million asset inventory. 😯
However (and let's be brutally honest here), if you have any copyright concerns, you probably should not be using machine-generated artwork in commercial projects. But that's a much deeper legal & ethical topic. 🙂
Thank you. I know it's not for commercial use right now; that's not what I was saying. Also, even Adobe's Responsible Innovation blog says the AI learns from Adobe Stock images and openly licensed or public-domain images whose copyrights have expired. I have used it several times to play around, NOTHING COMMERCIAL, and even though it says it's creating new images, you can clearly see it's pulling and copying things from the images it learned from. My suggestion/idea was to be able to control the database it learns from, limited to sources you have access to.
Thank you. I wouldn't be using it for a commercially generated spot: first, it's against my company's policy; second, I know how to create things from scratch. My comment was an idea/suggestion.
As things currently stand, the only way to regulate what the AI is trained on is to NOT use AI.
Hi @Christopher2802 this may help explain things further.
As @Nancy OShea pointed out, Adobe has purposefully trained its machine-learning model on its own source material and Creative Commons content, thereby avoiding the pitfall of non-licensed images surfacing in its output. Adobe's goal is for users to generate copyright-clean images.
The training isn't something Photoshop does. It already happened on Adobe's servers, which studied hundreds of millions of images and store hundreds of terabytes of data. Photoshop is reaching into this collated learning, so you can't simply tell it to look somewhere else.
You might want to take a look at this : https://www.adobe.com/content/dam/cc/en/ai-ethics/pdfs/Adobe-AI-Ethics-Principles.pdf
The problem with your proposal is that AI works by learning from a large body of training data, not by pulling specific images for each output. You would therefore be asking the AI to learn from scratch on only your selection of data and to ignore what it has learnt from wider data. That sounds limiting in terms of effectiveness and very expensive in terms of maintaining different training datasets for different customers.
Midjourney and Stable Diffusion allow users to train the AI with their own original content. However, users often abuse this by scraping unlicensed content from websites like Getty Images. Getty is suing Stability AI for $1.8 trillion for copyright infringement.
Given the latest U.S. Supreme Court ruling on what constitutes "fair use" (Andy Warhol Foundation v. Lynn Goldsmith), this could be a bloodbath for AI service providers.
Adobe is being necessarily prudent in keeping firm control over its training content. Nobody can afford a lawsuit.
This small article is in today's Washington Post on Page A18. It may (or may not) answer your question.
There are a couple of other press releases; you can find more with a search engine: