Valdair Leonardo
Community Expert
September 29, 2025
Sticky Question

How Do AI Data Ethics Shape Your Creative Workflow?

  • September 29, 2025
  • 36 replies
  • 3875 views

One thing that really stands out to me about Adobe’s Firefly-powered features, especially the new Harmonize feature in Photoshop, is that the model is trained only on licensed or public-domain data, never on our personal projects. This means the native Adobe AI model in Photoshop (Firefly) is designed to work for you, not from you: it helps you create without ever using your personal projects as training data.

By contrast, other AI models may use different methods and don’t always provide the same clarity, leaving the consent process less transparent.

So here are my questions for the community:

  • Do you think an ethical approach to training data changes how much you trust AI tools?

  • Would transparency about not using your work without consent make you more likely to try features like Harmonize?

  • More broadly, how does the way AI models are trained affect your creativity and your willingness to see them as partners in your process?

I’d love to hear how this transparency shapes your willingness to bring AI into your creative workflow, whether you’re using Adobe’s tools or not.

36 replies

Jiimmiey
Participant
February 16, 2026

Transparency definitely changes how comfortable I feel using AI in my workflow, especially when tools like Adobe Firefly inside Adobe Photoshop clearly state they’re trained on licensed or public data, because it removes that fear of your own work being quietly reused. When trust is there, I’m more open to experimenting and letting AI speed up ideation instead of resisting it. I’ve noticed the same mindset shift while building and testing creative features for side projects, like a web APK I’ve been working on: transparency around how data is used actually makes people more willing to try and stick with new tools.

Bisharatkhan
Participant
February 16, 2026

This is actually one of the most important conversations around AI right now, especially for creatives.

For me, yes—ethics and transparency directly affect trust. Knowing that Adobe Firefly is trained only on licensed and public-domain content, and not on my personal projects, makes a real difference. It removes that lingering concern of “am I feeding the system my own work without realizing it?”

That’s why features like Harmonize in Adobe Photoshop feel easier to adopt. When the consent boundaries are clear, I can focus on creativity instead of second-guessing how my data might be used later. Transparency doesn’t just build trust—it lowers friction in the creative process.

By contrast, when other AI tools are vague about training sources, it creates hesitation. Even if the results are impressive, uncertainty around data usage can make the tool feel less like a partner and more like a risk.

More broadly, how an AI model is trained absolutely affects how I relate to it creatively. Ethical training and clear communication help position AI as:

  • an assistant, not a replacement

  • a tool that supports ideas rather than extracting value from them

When those principles are in place, I’m far more willing to experiment, iterate, and integrate AI into my workflow.

So yes—responsible data practices don’t just shape trust, they actively shape how and whether AI becomes part of the creative process.

ciccariello
Participant
February 9, 2026

That’s exactly where my question diverges.

I understand—and actually appreciate—Adobe’s stance that Firefly is trained only on licensed and public-domain data, and that it explicitly does not learn from our personal projects. Ethically, that clarity matters.

But creatively, it raises a different issue for me:

How do I get Firefly to work primarily from my own visual language—my imagery, my archive, my authorship—without muddying ownership or consent?

In other words, I’m less concerned about Adobe training on my work, and more interested in deliberately training or steering the model with my work.

"Use everything" - poet Gertrude Stein
numb-book
Participant
February 6, 2026

Absolutely! Ethical training data is crucial for professional trust. I only use AI-powered creative tools with transparent data practices, to protect my work and my clients. Transparency isn’t optional; it’s essential for any serious creative workflow.

KShinabery212
Community Expert
February 2, 2026

I am a big advocate for ethics in AI, which is why I never use Midjourney.

Let's connect on LinkedIn. https://www.linkedin.com/in/kshinabery/
Known Participant
January 31, 2026

I’ve seen far too much stolen copyrighted content in Adobe’s stock images, as well as AI images in Adobe Stock that look like they’re from external AI models that do train on unlicensed data. I appreciate that Adobe is attempting to be more ethical here, but they don’t police their stock library enough to keep scammers out. Ethically built AI that actually compensates the people it’s built on, and keeps what I make with it private and in my control, is the only AI I’d consider using, but Firefly isn’t there yet.

Participant
November 20, 2025

Thanks for sharing

Participant
October 21, 2025

Thank you

Participant
October 20, 2025

better response

Participant
October 21, 2025

thanks

Participant
October 20, 2025

AI data ethics shape your creative workflow by ensuring that the technology you use respects privacy, fairness, and originality. Ethical AI practices help you source data responsibly, avoid bias in generated outputs, and maintain transparency about how AI contributes to your work. By integrating data ethics, you build trust with your audience, protect intellectual property, and create authentic, inclusive, and responsible content.