Tracy.H • Adobe Employee, Mar 24, 2023
Thanks for suggesting! At this time we are not allowing uploading of images but will consider this for future releases. (Merging all similar threads here so we can track the popularity of this feature request)
Thank you for all your votes and feedback! Style match is now a feature in Adobe Firefly! Check out everything you need to know about this feature here!
The current system only allows reference images from things we've already generated. Instead, I think it would be beneficial if we could upload our own reference image. Additionally, even in the 'Generative Fill' option, there doesn't seem to be any way to filter our uploaded image like in the 'Text to Image' feature, where we can tweak the generated image with filters (neon, fur, woody, lighting, etc.).
Merging your thread here so we can track the demand for certain features. You can upvote the original post to let us know you want this feature.
I trust this message finds you well. I would like to begin by expressing my admiration for the groundbreaking work your team has been doing with Adobe Firefly (Beta). Your commitment to pushing the boundaries of creativity and innovation has not gone unnoticed, and it's with this admiration in mind that I'd like to propose an innovative feature that I believe would further enhance the platform's capabilities and user experience.
The concept I'm suggesting involves the integration of an AI-driven feature that caters to the application of color palettes to both existing images and AI-generated content. Imagine a feature that enables users to input a color palette, or even a singular thematic color, and have the AI system reinterpret the image in line with the provided palette. This would offer unprecedented creative freedom and flexibility, as well as a level of customization currently unseen in other image editing or generation software.
Such a feature would not only streamline the image editing process, but also open up new possibilities for creatives seeking to implement specific aesthetic designs, facilitate color branding consistency, or explore new artistic styles. This could be particularly useful in the fields of digital art, graphic design, advertising, and social media content creation, where thematic consistency and distinct color schemes play a significant role.
Given Adobe's successful history in incorporating AI technologies, like Sensei, I am confident that the implementation of this feature would be a feasible and worthwhile endeavour. Moreover, this proposed addition could further differentiate Adobe Firefly (Beta) from its competitors and reinforce its position at the forefront of the digital creative landscape.
I fully appreciate that implementing such a feature will undoubtedly pose its challenges, but I firmly believe in Adobe's capability to rise to this occasion. The potential benefits to the creative community and the new possibilities this feature would introduce could well prove revolutionary.
Thank you very much for considering this proposal. I'm excited about the potential for this feature and would be more than willing to provide any further input or clarification that may be required.
Best regards, Darshan Patil
Merging your thread here so we can track the demand for certain features. You can upvote the original post to let us know you want this feature.
I think it would be really cool if I could describe an image and then use a reference image to help generate parts of what the AI decides to generate. For example, if I wanted to generate a house that looks like something I've taken a photo of.
Merging your thread here so we can track the demand for certain features. You can upvote the original post to let us know you want this feature.
Here's an idea: allow users to upload their own images from their computer and let the AI learn from those images, offering users similar or variant images built around the stylization of the initial concept image submitted from their device.
Merging your thread here so we can track the demand for certain features. You can upvote the original post to let us know you want this feature.
When generating images, it would help if we could input another image. For example, in the image below the watch and headphones are generated, but I'd like to specify which watch and which headphones to include, based on multiple image inputs. I can imagine this for larger image creation as well; for example, visualising my car in different locations based on images I have taken of those locations/landscapes, with additional elements.
Merging your thread here so we can track the demand for certain features. You can upvote the original post to let us know you want this feature.
A section dedicated to more rigid concept art would be very interesting for creating characters, environments, objects, or clothing.
Being able to start from a basic schema, for example:
- Clothing:
a model pose, body, feet, etc., so that the AI creates only the dress or shoes
- Character and environment creation:
a basic schematic from which the AI creates a character or a building from the front, back, and side, with details of the head, etc.
A section dedicated to creating from a base like this would orient Adobe Firefly toward more professional use, offering a tool that assists creation rather than only creating on its own.
Merging your thread here so we can track the demand for certain features. You can upvote the original post to let us know you want this feature.
I don't know whether the development team is working on this or not, but it would be great to be able to guide the AI with creative line drawings and sketch layers, both for the main structure and for every detail. For example, while an artist is working on character and asset designs in Photoshop...
Merging your thread here so we can track the demand for certain features. You can upvote the original post to let us know you want this feature.
I really enjoy the generation capabilities of Firefly and other AIs, but for now they are somewhat of a novelty. They produce interesting images and other content, but I find myself wishing I could control the output more, to produce the thing already in my mind.
For now:
I would like a way to influence the iterations by being able to select parts/features of the existing photo that I want the AI to replicate. Not just removing parts to regenerate, but more ways to encourage the AI in the direction I am envisioning.
In the future:
I imagine a more collaborative environment using existing Photoshop tools, in which I can cut parts of the image, rotate, drag things around, introduce other edits, shades and hues, and select specific colors. Essentially, an integrated environment with Photoshop or Illustrator, depending on the application. Firefly is then able to take my Frankenstein image and generate a "clean" version of what I am trying to produce.
I think it would be amazing if we could include a visual reference, i.e. upload a reference image with the text prompt to further explain the request. I am noticing that the reference it currently uses is missing accurate visual references for most types of flowers, so it makes up its own. Not awesome. ;-/
Merging your thread here so we can track the demand for certain features. You can upvote the original post to let us know you want this feature.