How Do AI Data Ethics Shape Your Creative Workflow?
One thing that really stands out to me about Adobe’s Firefly-powered features, especially the new Harmonize feature in Photoshop, is that the model is trained only on licensed or public-domain content, never on our personal projects. In other words, Firefly, the native Adobe AI model in Photoshop, is designed to work for you, not from you: it helps you create without ever using your work as training data.
By contrast, other AI models may be trained in different ways and don’t always offer the same clarity, which leaves the consent process less transparent.
So here are my questions for the community:
- Do you think an ethical approach to training data changes how much you trust AI tools?
- Would transparency about not using your work without consent make you more likely to try features like Harmonize?
- More broadly, how does the way AI models are trained affect your creativity and your willingness to see them as partners in your process?
I’d love to hear how this transparency shapes your willingness to bring AI into your creative workflow, whether you’re using Adobe’s tools or not.