Valdair Leonardo
Community Expert
September 29, 2025
Question

How Do (AI) Data Ethics Shape Your Creative Workflow?

One thing that really stands out to me about Adobe’s Firefly-powered features, especially the new Harmonize feature in Photoshop, is that Firefly is trained only on licensed or public-domain data, never on our personal projects. This means the native Adobe AI model in Photoshop (Firefly) is designed to work for you, not from you. In other words, it helps you create without using your personal projects as training data.

By contrast, other AI models may use different methods and don’t always provide the same clarity, leaving the consent process less transparent.


So here are my questions for the community:

  • Do you think an ethical approach to training data changes how much you trust AI tools?

  • Would transparency about not using your work without consent make you more likely to try features like Harmonize?

  • More broadly, how does the way AI models are trained affect your creativity and your willingness to see them as partners in your process?

I’d love to hear how this transparency shapes your willingness to bring AI into your creative workflow, whether you’re using Adobe’s tools or not.

38 comments

Participating Frequently
October 7, 2025

Great questions, Valdair! Yes, knowing that an AI like Firefly is trained on licensed data definitely gives me more confidence in it. That kind of transparency makes me feel safer and more open to using these tools in my creative work.

Valdair Leonardo
Community Expert
October 9, 2025

Great! Thank you so much for sharing your opinion, @nulls_2046.

Participant
October 6, 2025

Absolutely — transparency makes a huge difference. Knowing that Firefly is trained only on licensed or public-domain data builds real trust. When creators feel their work is respected, it’s easier to see AI as a true creative partner, not a threat. Ethical AI = more confidence, more creativity.

Participant
October 5, 2025

Absolutely, an ethical approach to training data makes a big difference in how much I trust AI tools. Knowing that Adobe Firefly is trained only on licensed or public-domain content gives me more confidence to use features like Harmonize without worrying about where the AI is pulling inspiration from.

Transparency is key. When I know my personal work isn’t being used to train the AI without my consent, I’m far more open to exploring what it can do. It feels more like a collaboration than a risk.

In my workflow, this kind of clarity makes me more willing to experiment with AI. It shifts the dynamic from “Will this steal my style?” to “How can this enhance my vision?”, which is creatively empowering.

Participant
October 4, 2025

Absolutely—knowing my work isn’t being used to train the model builds real trust. That kind of transparency makes me much more open to exploring features like Harmonize and seeing AI as a true creative partner.

Valdair Leonardo
Community Expert
October 9, 2025

Thank you so much for sharing, Ethan 🙂 

Participant
October 3, 2025

Yes, an ethical approach definitely makes me trust AI tools more. Knowing my work won’t be used without consent gives me peace of mind and makes me more open to experimenting, since it feels like the AI is truly assisting rather than taking.

Valdair Leonardo
Community Expert
October 8, 2025

Thanks for sharing your point, John!

Participant
October 3, 2025

I really think ethics play a huge role in trusting AI tools. The fact that Firefly is trained only on licensed and public data makes me a lot more confident using it — I don’t have to worry about my own projects being taken without consent. That kind of clarity is rare, and it makes a difference.

With other tools, where the training process isn’t as transparent, I sometimes hesitate because I’m not sure what’s happening behind the scenes. And when you’re doing creative work, that little bit of doubt can hold you back.

So yes, transparency like this definitely makes me more willing to try features like Harmonize and actually see AI as something that supports my creativity rather than something I need to be cautious about.

Valdair Leonardo
Community Expert
October 7, 2025

That's a great point of view! I feel the same! Thanks for sharing, @walter_0059.

Participant
October 1, 2025

Great question, Valdair. For me, data ethics directly impacts trust—and trust is what determines whether I’m willing to invite an AI tool into my creative process.

When a company makes it clear that my personal or client projects won’t be repurposed as training data, it lowers the barrier to experimentation. I can explore features like Harmonize without worrying that my unique style or proprietary work is being fed into a massive model that others might benefit from without my consent. That sense of security makes me more comfortable using the tool freely, which ironically leads to more creativity, not less.

On the flip side, when models are vague about their training sources or when consent feels like an afterthought, I tend to hold back. I’ll use them for drafts, tests, or ideation, but rarely for final client-facing projects. The lack of transparency makes me treat the AI as a “sandbox toy” rather than a genuine partner.


So yes, the ethics behind the training data shape not just my trust, but the depth of integration AI has in my workflow. Transparency turns the tool from something I cautiously test into something I can confidently collaborate with.

Valdair Leonardo
Community Expert
October 2, 2025

Thanks a lot for your thoughtful contribution, @Amy_Greenz; it really resonates. I especially connect with the way you framed transparency as the difference between treating AI like a “sandbox toy” versus a true collaborator. That metaphor captures the reality of how trust shapes not just adoption but depth of use.


I’d also add that this ethical clarity doesn’t only affect how much we use AI, but what kinds of projects we feel safe bringing it into. Without that foundation of trust, many creatives (myself included) will hold back on client work or high-stakes projects, which means the tool never reaches its full potential in our workflows. With transparency, on the other hand, it’s not just experimentation that grows; it’s confidence, speed, and the willingness to explore new creative directions.

Participant
September 30, 2025

Transparency in how AI models are trained really makes a difference. If I know my personal projects won’t be used without consent, I feel more comfortable experimenting with features like Harmonize. It creates trust and allows me to see AI as a supportive tool rather than something I need to guard against. In creative work, that sense of ethical assurance often matters as much as the results.

Valdair Leonardo
Community Expert
September 30, 2025

Thank you for sharing your perspective, @stromt_9157. A sense of security is an essential pillar for creative work, and insecurity can certainly lead to creative blocks.

Participant
October 2, 2025

You make a thoughtful point. Security really does give people the freedom to take risks and express themselves fully, while insecurity often creates hesitation. I’ve noticed the same idea applies in many areas of life, whether in art or in work: stability allows the focus to shift toward creativity rather than worry.