Where the three little dots appear as you hover over the Generative image [options], could we include a reference image in Generative Fill, like Text to Image in Firefly? This would let the user get closer to what they imagine.
This would make things a lot easier... I want to have a bullet hit for a film that I am working on, but since that falls under violence, I have been trying to describe the hole another way, with very little success...
Photoshop is at its core an image editing tool; most people combine multiple images and work with layers and filters to make them look like part of one composition. With a text prompt it can understand some basic things, but it would be a great option to also have a way to add an image into the scene where I want it and have it automatically merge with the scene, with all the lighting, textures, and shadows.
+1000%! Generating very specific content is a challenge, to say the least. The number of re-rolls required and the processing power must be enormous, especially once this gets to production and adoption skyrockets. Advocating for this as a way to get around filters is not necessarily a great way to get support. That said, being able to provide a source image could reduce the number of re-renders, or the effort of finding the right way to describe your desired result. Not related, but it would be really great if prompts worked more like MidJourney's, allowing natural language descriptions.
Isn't this what I suggested here a week ago?
In Generative Fill there should be an option to replace the subject. For instance, I want to replace a bird subject with a flight.
Also, there should be an image-to-text option in the same feature. For instance, if I want to create a birthday card using an image, I should have an option that suggests better write-ups, helping the user make things perfect based on the graphics.
Hi @Manish27821306bf7u, your first request is currently possible using prompts in Generative Fill. Is something not working for you?
For your second request have you seen the beta for text usage in Adobe Firefly?
Your Generative Fill is amazing.
My idea for an app would be using AI to its full potential. I have been a photographer all my life, fully digital since 2002, and Photoshop has been my tool of the trade since then. I use my photographs in many different ways; my favorite is turning my photographs into paintings. I currently have apps and actions that do truly amazing things, but the time involved is a disadvantage. Most applications and actions do an OK job out of the gate, but making a professional-looking painting, other than painting it yourself (which I enjoy), takes hours and hours of tweaking layers, sometimes over a hundred, sometimes days. Heck, I might as well just paint it myself, but with thousands of photographs I have plans for after 20 digital years, that is a bit daunting.
I know Generative Fill is in beta, but the resolution is unusable in its current state. Using Generative Fill to create and generate a painting is a great idea, but currently the results are dismal. I would suggest that your amazing, talented associates and programmers develop a dedicated product in Photoshop for professional creators who recreate their photographs as watercolors or oil paintings. Creating high-resolution transformations that are marketable would be amazing.
Currently I use Photoshop to tweak and adjust the work I transform with my apps; the results are professional, so that end of the job is already here and getting better with each update.
One of my favorite apps is Dynamic Auto Painter. The learning curve for this software is akin to Photoshop's, but the results range from OK to amazing. Check it out to see what I am talking about.
The marriage of DAP and Generative fill would be amazing.
Let me know what you think.
Playing with the Generative Fill. It's fun, and I do like it. However, it's basically a toy without this feature. I've been playing with generative AI image generation for a while, and you really need two things:
1) txt-to-img, where you invent something from scratch.
2) img-to-img, where you adapt an image according to a prompt.
Just having txt-to-img is interesting, and I will credit Photoshop for doing a terrific job blending to the edges. But it makes it hard to make any overall changes without overwriting everything. You need img-to-img so you can less destructively modify what you're working on: "brighter", "darker", "sunnier". Plus, most of these algorithms have a tuning variable for how much to obey or replace the source. So maybe all you need is a parameter on Generative Fill for this "how much to keep or change the source" aspect, where 0 is the same as the current implementation and 1 attempts to preserve as much of the existing image as possible.
But yeah, I was working on a landscape, and I really wanted to make it into a sunny day with strong shadows. No luck without starting over with the current implementation in Photoshop. But I know the functionality exists, because I've tried it with other systems, so it seems like a reasonable addition to the feature.
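The "how much to keep or change the source" knob described above is usually called *strength* (or denoising strength) in img2img diffusion pipelines. As a minimal, hypothetical sketch of the idea, here is a toy function (not real diffusion; the name and behavior are my own for illustration) where, per the common convention, strength 0 keeps the source untouched and strength 1 replaces it entirely with noise that a real pipeline would then denoise into a new image:

```python
import random

def img2img_strength_demo(source, strength, seed=0):
    """Toy illustration of the img2img 'strength' knob: blend each
    source pixel toward random noise in proportion to strength.
    A real diffusion pipeline would denoise from this partially
    re-noised starting point, so low strength preserves the source
    and high strength lets the model repaint freely.
    'source' is a flat list of pixel values in [0, 1]."""
    rng = random.Random(seed)
    return [(1 - strength) * p + strength * rng.random() for p in source]
```

Note the convention here is inverted from the post's proposal (0 = preserve, 1 = replace), which is how most existing img2img tools expose it; either mapping expresses the same idea.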
It would be really helpful in my photography work if Generative Fill could utilize other photos I personally have as references.
Yes that would be awesome. But I think it works off a catalog of generic photos.
In my dream Photoshop world, I could choose to use the general photo pool or choose my own directory of reference photos.
Seriously luvin' the Generative Fill.
(1) I wish there was an option to render without making it fit or blend into the photo. E.g., maybe I just want a goofy-looking butterfly that obviously doesn't belong or fit in my photo or creation. But that's what I want.
(2) Could I possibly reference objects in the picture? Generative Fill a "Dinosaur busting through the wall" and PS would see that I've selected a wall (or part of it) and generate the creature positioned and angled in context to the wall.
I am using the latest Beta on the latest Windows
[Mod: Moved to the Photoshop Beta forum]
For the majority, I assume in reference to photographers, the Generative Fill feature will be used to address imperfections in images caused by mistakes in camera, or, for example, to expand images. I currently have an image I wish I had shot wider; I have an image of the same scene shot wider, but from a different angle, perspective, decisive moment, etc. When I use the AI feature to expand my image, none of the versions it creates are usable. Not even a little bit usable. It's a contradiction to the ad for this feature, much like a McDonald's burger ad: what you see is a stark contrast to what you get.
However, if I could add my reference image to the AI, essentially showing it what I want adapted to this specific image, I'm pretty certain I would receive more workable versions. Starting from scratch every time may be the only option, but if the creative had the ability to add reference images, this would surely ease the load on the AI servers(?).
I don't have any comprehension of AI learning, so this may not even be possible, but if it is, please incorporate this feature into the final version.
For your first idea, what if you generated a butterfly in a separate file and then brought it into your working file?
For the second, you should be able to do this today, though it might change the look of the wall in your file. But we are looking into image referencing, so I have merged this idea with the reference idea thread.
It would be amazing to have the option of sourcing only your own images for Generative Fill when you want. As amazing as Generative Fill is, it makes my image feel less authentic using other people's images.
[Moved to the Photoshop Beta forum]
YES, that's what I'd like too. As a professional photographer for over 30 years, with Photoshop experience since the first version came out, I have a fair amount of experience and would really value being able to import my own stock images with my copyright so I can use them commercially. I'm hoping this will happen once the beta version is completed. Using my own stock images would also allow me to open the files in their original camera resolution, improving the quality substantially. Being able to use my own files would also provide a way to use the AI feature within very specific parameters, rather than just using some image the search/prompt feature selects based on text. The text prompt could be used effectively if it could access the metadata within File Info, most effectively the Description field, where specific text prompts would make selections directly from the text descriptions within each photo. A GF text prompt could also be directed to an image ID number so it would access a specific image. I love the idea of GF, but only if it uses images created by the artist (or ones they have purchased rights to, or that are in the public domain). I'm not knocking the Adobe Stock selection, but it just comes down to the fact that some of my own photos offer me better options: images that would combine more effectively or have more dramatic effect. This would save me a ton of time processing, trying different ideas or versions for an image that could be generated quickly, rather than building layers and masks manually one at a time and finally selecting one to complete to perfection. Thanks, Adobe, and please take these suggestions to heart. Best to all!
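The idea above, matching a text prompt against the Description field of each photo's metadata (or a specific image ID), can be sketched with a toy keyword matcher. This is purely illustrative: the function name, catalog shape, and scoring are my own assumptions, not any Adobe API, and a real system would read Description from XMP/IPTC metadata rather than a list of dicts:

```python
def select_reference_images(prompt, catalog):
    """Hypothetical sketch: rank a personal image catalog against a
    Generative Fill text prompt by counting how many prompt words
    appear in each image's Description field. 'catalog' is a list of
    dicts like {"id": "IMG_0042", "description": "..."}; returns
    matching IDs, best match first."""
    words = {w.lower() for w in prompt.split()}
    scored = []
    for item in catalog:
        desc_words = {w.lower().strip(".,") for w in item["description"].split()}
        score = len(words & desc_words)  # crude keyword-overlap score
        if score:
            scored.append((score, item["id"]))
    return [img_id for score, img_id in sorted(scored, reverse=True)]
```

A prompt like "stormy coastal sunset" would then pull the photographer's own best-described shots to the front of the candidate list before any generation happens.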
Generative Fill DESPERATELY needs the ability to use a reference image. Matching items across images is impossible without it.
I have photos of a model; in one shoot, she was wearing a stage outfit (leather(ette?) dress), made up, hair in a different style and glammed up.
In another shoot, she was in a more casual denim dress.
I am trying to match her to other models from the same casual outfit shoot; the problem is, they all had sunglasses and were facing the camera and she is the only one that they shot without sunglasses or facing the camera. All of her shots are three quarter side.
She DOES face the camera and have sunglasses in the glam shot, though.
I just want to replace the dress she is wearing in the glam shot with the one she was wearing in the casual shot so that I can keep the sunglasses and front-facing view and not have to re-shoot (if that is even possible).
But Generative Fill utterly fails to use the denim dress samples I provided in any meaningful way, and it does not seem to know what "faded denim", "stonewashed denim", "light denim", "stonewashed jeans", etc., look like. It keeps spitting out the same blue denim over and over again.
It's irrelevant anyway, because I need the generated dress to actually match the original.
Without the ability to use a reference image, it's a losing battle.
There are many things that Generative Fill does very well, but there's no consistency at all.