Valdair Leonardo
Community Expert
September 29, 2025
Sticky Question

How Do (AI) Data Ethics Shape Your Creative Workflow?

  • 33 replies
  • 3823 views

One thing that really stands out to me about Adobe’s Firefly-powered features, especially the new Harmonize feature in Photoshop, is that they’re trained only on licensed or public-domain data, never on our personal projects. In other words, the native Adobe AI model in Photoshop (Firefly) is designed to work for you, not from you: it helps you create without using your personal projects as training data.

By contrast, other AI models may use different training methods and don’t always disclose them, leaving the consent process less transparent.

 

So here are my questions for the community:

  • Do you think an ethical approach to training data changes how much you trust AI tools?

  • Would transparency about not using your work without consent make you more likely to try features like Harmonize?

  • More broadly, how does the way AI models are trained affect your creativity and your willingness to see them as partners in your process?

I’d love to hear how this transparency shapes your willingness to bring AI into your creative workflow, whether you’re using Adobe’s tools or not.

33 replies

Valdair Leonardo
Community Expert
October 9, 2025

Sorry to hear that, @daughtrey_7277.

Valdair Leonardo
Community Expert
October 9, 2025

Thank you so much for sharing your view, @focused_enthusiasm3800. Yes, it gets even more concerning on websites with forms that collect data. 🙂

Participant
October 7, 2025

Honestly, I think ethics in AI training data make a huge difference in how much trust users have.
When I know a model like Firefly is trained only on licensed or public-domain content — and not on my personal work — I feel a lot more comfortable experimenting with it.

It’s not just about protecting artists’ rights, it’s also about creating a sense of respect and collaboration between the tool and the creator. Transparency like this makes AI feel less like it’s “taking” from us and more like it’s “working with” us.

For me, that’s exactly what encourages creativity — knowing the tech I’m using aligns with my own values.

Participant
October 19, 2025

Absolutely agree — trust is everything when it comes to creative tools. When users feel confident that their work and others’ creations are respected, it changes the entire relationship with AI. Models trained on ethical data don’t just protect rights — they foster genuine collaboration and make creators feel part of the process rather than exploited by it. That’s the kind of ecosystem that inspires long-term creativity.

 

Valdair Leonardo
Community Expert
October 7, 2025

Hi Manuel! I suggest you contact the Adobe Support Team for more details on your case:

https://helpx.adobe.com/contact.html

 

Participant
October 7, 2025

This is without a doubt a very important question, and well worth reflecting on.

I’m facing this exact dilemma right now: my job requires me to audit an enormous number of files, and among the various AIs on the market, I’m wary of using any of them to help with this document analysis, because the documents are confidential.

An AI that inspires trust would undoubtedly have helped me make that decision and sped up my work.

Participating Frequently
October 7, 2025

Great questions, Welder! Yes, knowing that an AI like Firefly is trained on licensed data definitely gives me more confidence in it. That kind of transparency makes me feel safer and more open to using these tools in my creative work.

Valdair Leonardo
Community Expert
October 9, 2025

Great! Thank you so much for sharing your opinion, @nulls_2046.

Participant
October 6, 2025

Absolutely — transparency makes a huge difference. Knowing that Firefly is trained only on licensed or public-domain data builds real trust. When creators feel their work is respected, it’s easier to see AI as a true creative partner, not a threat. Ethical AI = more confidence, more creativity.

Participant
October 5, 2025

Absolutely, an ethical approach to training data makes a big difference in how much I trust AI tools. Knowing that Adobe Firefly is trained only on licensed or public-domain content gives me more confidence to use features like Harmonize without worrying about where the AI is pulling inspiration from.

Transparency is key. When I know my personal work isn’t being used to train the AI without my consent, I’m far more open to exploring what it can do. It feels more like a collaboration than a risk.

In my workflow, this kind of clarity makes me more willing to experiment with AI. It shifts the dynamic from “Will this steal my style?” to “How can this enhance my vision?”, which is empowering, creatively.

Participant
October 4, 2025

Absolutely—knowing my work isn’t being used to train the model builds real trust. That kind of transparency makes me much more open to exploring features like Harmonize and seeing AI as a true creative partner.

Valdair Leonardo
Community Expert
October 9, 2025

Thank you so much for sharing, Ethan 🙂 

Participant
October 3, 2025

Yes, an ethical approach definitely makes me trust AI tools more. Knowing my work won’t be used without consent gives me peace of mind and makes me more open to experimenting, since it feels like the AI is truly assisting rather than taking.

Valdair Leonardo
Community Expert
October 8, 2025

Thanks for sharing your point, John!