One thing that really stands out to me about Adobe’s Firefly-powered features, especially the new Harmonize feature in Photoshop, is that Firefly is trained only on licensed or public-domain data, never on our personal projects. In other words, the native Adobe AI model in Photoshop is designed to work for you, not from you: it helps you create without treating your personal projects as training data.
By contrast, other AI models may be trained differently and don’t always offer the same clarity, leaving the consent process less transparent.
So here are my questions for the community:
Do you think an ethical approach to training data changes how much you trust AI tools?
Would transparency about not using your work without consent make you more likely to try features like Harmonize?
More broadly, how does the way AI models are trained affect your creativity and your willingness to see them as partners in your process?
I’d love to hear how this transparency shapes your willingness to bring AI into your creative workflow, whether you’re using Adobe’s tools or not.
Transparency in how AI models are trained really makes a difference. If I know my personal projects won’t be used without consent, I feel more comfortable experimenting with features like Harmonize. It creates trust and allows me to see AI as a supportive tool rather than something I need to guard against. In creative work, that sense of ethical assurance often matters as much as the results.
Thank you for sharing your perspective, @stromt_9157. A sense of security is an essential pillar for creative work, and insecurity can certainly lead to creative blocks.
You make a thoughtful point. Security really does give people the freedom to take risks and express themselves fully, while insecurity often creates hesitation. I’ve noticed the same idea applies in many areas of life, whether in art, work, or everyday planning: stability allows the focus to shift toward creativity rather than worry.
Great question, Valdair. For me, data ethics directly impacts trust—and trust is what determines whether I’m willing to invite an AI tool into my creative process.
When a company makes it clear that my personal or client projects won’t be repurposed as training data, it lowers the barrier to experimentation. I can explore features like Harmonize without worrying that my unique style or proprietary work is being fed into a massive model that others might benefit from without my consent. That sense of security makes me more comfortable using the tool freely, which ironically leads to more creativity, not less.
On the flip side, when models are vague about their training sources or when consent feels like an afterthought, I tend to hold back. I’ll use them for drafts, tests, or ideation, but rarely for final client-facing projects. The lack of transparency makes me treat the AI as a “sandbox toy” rather than a genuine partner.
So yes—the ethics behind the training data shapes not just my trust, but the depth of integration AI has in my workflow. Transparency turns the tool from something I cautiously test into something I can confidently collaborate with.
Thanks a lot for your thoughtful contribution, @Amy_Greenz; it really resonates. I especially connect with the way you framed transparency as the difference between treating AI like a “sandbox toy” and treating it as a true collaborator. That metaphor captures how trust shapes not just adoption but depth of use.
I’d also add that this ethical clarity affects not only how much we use AI but what kinds of projects we feel safe bringing it into. Without that foundation of trust, many creatives (myself included) will hold back on client work or high-stakes projects, which means the tool never reaches its full potential in our workflows. With transparency, on the other hand, it’s not just experimentation that grows; it’s confidence, speed, and the willingness to explore new creative directions.
I really think ethics play a huge role in trusting AI tools. The fact that Firefly is trained only on licensed and public data makes me a lot more confident using it — I don’t have to worry about my own projects being taken without consent. That kind of clarity is rare, and it makes a difference.
With other tools, where the training process isn’t as transparent, I sometimes hesitate because I’m not sure what’s happening behind the scenes. And when you’re doing creative work, that little bit of doubt can hold you back.
So yes, transparency like this definitely makes me more willing to try features like Harmonize and actually see AI as something that supports my creativity rather than something I need to be cautious about.
That's a great point of view! I feel the same! Thanks for sharing, @walter_0059.
Yes, an ethical approach definitely makes me trust AI tools more. Knowing my work won’t be used without consent gives me peace of mind and makes me more open to experimenting, since it feels like the AI is truly assisting rather than taking.
Thanks for sharing your point, John!
Absolutely—knowing my work isn’t being used to train the model builds real trust. That kind of transparency makes me much more open to exploring features like Harmonize and seeing AI as a true creative partner.
Thank you so much for sharing, Ethan 🙂
Absolutely, an ethical approach to training data makes a big difference in how much I trust AI tools. Knowing that Adobe Firefly is trained only on licensed or public-domain content gives me more confidence to use features like Harmonize without worrying about where the AI is pulling inspiration from.
Transparency is key. When I know my personal work isn’t being used to train the AI without my consent, I’m far more open to exploring what it can do. It feels more like a collaboration than a risk.
In my workflow, this kind of clarity makes me more willing to experiment with AI. It shifts the dynamic from “Will this steal my style?” to “How can this enhance my vision?”, which is creatively empowering.
Absolutely — transparency makes a huge difference. Knowing that Firefly is trained only on licensed or public-domain data builds real trust. When creators feel their work is respected, it’s easier to see AI as a true creative partner, not a threat. Ethical AI = more confidence, more creativity.
Great questions, Valdair! Yes, knowing that an AI like Firefly is trained on licensed data definitely gives me more confidence in it. That kind of transparency makes me feel safer and more open to using these tools in my creative work.
Great! Thank you so much for sharing your opinion, @nulls_2046.
This is without doubt a very important question, and one well worth reflecting on.
I’m facing this very dilemma right now: my work requires me to audit an enormous number of files, and among the many AIs on the market, I’m wary of using any of them to help with this document analysis, because the documents are confidential.
An AI that inspires trust would undoubtedly have helped me reach a decision already and sped up my work.
Hi Manuel! I suggest you contact the Adobe Support Team to get more details on your case:
https://helpx.adobe.com/contact.html
Honestly, I think ethics in AI training data make a huge difference in how much trust users have.
When I know a model like Firefly is trained only on licensed or public-domain content — and not on my personal work — I feel a lot more comfortable experimenting with it.
It’s not just about protecting artists’ rights, it’s also about creating a sense of respect and collaboration between the tool and the creator. Transparency like this makes AI feel less like it’s “taking” from us and more like it’s “working with” us.
For me, that’s exactly what encourages creativity — knowing the tech I’m using aligns with my own values.
Absolutely agree — trust is everything when it comes to creative tools. When users feel confident that their work and others’ creations are respected, it changes the entire relationship with AI. Models trained on ethical data don’t just protect rights — they foster genuine collaboration and make creators feel part of the process rather than exploited by it. That’s the kind of ecosystem that inspires long-term creativity.
Sorry to hear that, @daughtrey_7277.
Thank you so much for sharing your view, @focused_enthusiasm3800. Yes, especially on websites with forms collecting data, it gets even more concerning. 🙂
Yes, an ethical approach to training data builds trust—knowing my work won’t be used without consent makes me more comfortable exploring AI features like Harmonize. Transparency like Adobe’s sets a positive standard and makes AI feel more like a true creative partner, not a silent competitor.
That’s actually something I’ve been thinking about too.
Thank you!