
Gen AI tools going the wrong direction

Community Beginner,
May 21, 2025

I am not happy with the choices and direction that Gen AI is taking. I do not like having my intellectual property pushed from my device onto cloud servers, and I do not like having external material inserted into my work, which I view as inappropriate and unethical. When I remove a person with Gen AI and get a new face inserted, a face that was not at the wedding and that no one knows, that is not acceptable. I do welcome Gen AI, but only when it runs locally and only uses material from my own collection. For example, I shoot 3,000-4,000 images at a wedding, so if I need to fix backgrounds, faces, dresses, etc., I have lots of content to choose from that Gen AI could draw on. Content fill seems to work only within the image that is open. Having content fill generate across 3,000 different images would be great. I can do the work as I always have; copying and pasting manually is the current workflow. That is what an AI could help with, and that includes remove, extend, and selections.

 

I run a CLIP AI that trains on my machine all the time. It is not an editing tool, but nonetheless it does not need a cloud server. Adobe appears not to be building tools users can run themselves, but instead charges them more for a tool that takes someone else's material and puts it into my artwork. This is WRONG.

16 Comments
Community Expert,
May 22, 2025

»Having content fill generate across 3,000 different images would be great.«

What is that supposed to mean? 

Community Beginner,
May 22, 2025

The 3,000 images are my original content from the wedding that I referenced. I am saying that Gen AI trained on somebody else's copyrighted material is not what I want; I want the Gen AI to train itself on my copyrighted content and let me use that in my work to make my editing easier and faster. It is a fancier way of extending content fill across the whole image collection. When I edit problems in my images, I leverage my entire collection and use its content as a source when parts are needed. The same is true of video; that is the whole reason to shoot lots of B-roll.

Community Expert,
May 22, 2025

Regarding »copyrighted …«: Did you inform yourself about Adobe’s AI training material? 

Edit: https://helpx.adobe.com/firefly/get-set-up/learn-the-basics/adobe-firefly-faq.html#:~:text=The%20cur...

 

I suspect that 3,000 images might be an insufficient amount to train an image-generating AI if one expects fairly decent output.

(Edit: I underlined »suspect« because I am aware that I do not know how many images would constitute a decent basis, but based on the coverage of generative AI in the news, it seems that volume is an important factor.)

Community Expert,
May 22, 2025

I agree with c.pfaffenbichler's view that it would take more than a few thousand images to reliably generate acceptable AI content.
That said, the idea of being able to have Content-Aware Fill draw its content from more than one image is, in my view, a good one. Even two or three images could potentially give a better result than can be achieved from the image being edited alone. I would certainly support an expansion of Content-Aware Fill on that basis.

As an aside, I had to smile at 3,000-4,000 images from a wedding. Many years ago I used to photograph weddings on roll film in 6x7 format, which was 10 frames to a roll. I can't imagine coming away with 400 rolls to process. How things have changed! 🙂

Dave

Community Beginner,
May 22, 2025

Thank you for your reply.

1. Yes, I have seen Adobe's PR on copyrighted training content. I find it lacking. For example, they just announced that Veo 2, Imagen, Flux 1.1, DALL·E, and other third-party AIs are being added to Firefly, i.e., to most of Adobe's tools. How is that compliant with their training materials statement?

2. 3,000 images is a lot for training when you only need a narrow scope of use. Image AI and face recognition have been around a long time (just look at Lightroom). I run CLIP (a French image AI, as I understand it) against my training set of only about 150 images, and it does better than Adobe's crappy 2K resolution. I am not trying to make an unrecognizable face; I want to reproduce the faces I already have, which makes it easy. Training, i.e., vectorizing the directory, only takes about 10 minutes (a rough sketch of that step follows this list).

3. At some point, Adobe will have to wake up and decide who they are building tools for: artists or fifth graders. For example, I cannot even take an image of the Sistine Chapel and edit it, because it has nudity.
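For illustration, here is a minimal sketch of that vectorizing step, using the public OpenAI CLIP weights via the Hugging Face transformers library. This is my assumption of the general idea, not necessarily what ChatRTX runs internally, and the folder name is made up:

```python
# Rough sketch only: embed every image in a local directory with CLIP,
# entirely on-device. "wedding_set" is a hypothetical folder name.
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

vectors = {}
for path in Path("wedding_set").glob("*.jpg"):
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        # One feature vector per image; ~150 images takes minutes, not days.
        vectors[path.name] = model.get_image_features(**inputs)[0]
```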

 

My point in this thread was that I see Adobe making wrong decisions. I am old school and started on Adobe in 1991: buying, owning, and installing my own copy. When they started the subscription model and the "cloud," I was not a fan. I see Adobe trying to drive everyone into the cloud with the idea that it is needed for AI. It is not. My toolkit includes DXO, ON1 Raw, RawTherapee, Darktable, Corel Paint, Anthropics PortraitPro, and NVIDIA ChatRTX. I still think Photoshop is a great tool, but it is fast losing ground to these others, and Adobe's policy and direction on new tech will make it untenable in my mind.

Community Beginner,
May 22, 2025

I appreciate your thoughts. I was being kind with only 3,000-4,000 images. I use two additional shooters using my cameras (2x Nikon D850 and a Sony A7R V). Each raw image is about 60 MB. I get back 6,000+ images, but many are shot on continuous, so it is easy to drop duplicates. I start culling from there to get to the 250 or so perfect images. I was an early digital convert. Just the darkroom time would take weeks, not counting editing with acetate overlays.

 

On your point about the training set: if you want a whole-image generator for any possible scenario, you would need a very large set. Imagine needing the whole world as background: thousands and thousands of different people, cultures, fashions, seasons, machines, etc. But if the scope of what is being asked for is only this wedding, with this wedding's background and people, then 3,000 is way more than adequate. And my sets are multi-megapixel images, not 2K.

 

Glad to meet someone who remembers what film is rather than having read about it in a history book.

Community Expert,
May 22, 2025

»For example, they just announced that Veo 2, Imagen, Flux 1.1, DALL·E, and other third-party AIs are being added to Firefly, i.e., to most of Adobe's tools.«

Could you please post a link to that announcement? 

Community Beginner,
May 22, 2025

I understand the Adobe announcement was made at Adobe MAX London.

 

I saw it here: Adobe Shockingly Welcomes Third-Party AI Models to the Creative Cloud | PetaPixel

Community Beginner,
May 22, 2025

A slightly different aspect, but I have only experimented with the Premiere Pro clip-extend feature, which generates video from the previous few seconds of a clip and extends it forward in time. Conceptually, this uses my video and generates new material from it, not pulling from the world, just from the video. I suspect this is done by uploading to cloud processing, but I do not know for sure.

Community Expert,
May 22, 2025

Thanks for the link! 

 

Quote from the article: 

»While Adobe Firefly can excel in certain areas, and maintains industry-leading commercial safety, many users find that competing models deliver better and more realistic results.«

So it appears to be up to the individual user to decide whether to use the commercially safe Firefly or go with one of the other models (even if accessed from the Firefly interface); one has to wonder how many users will be aware of the distinction …

 

Edit: Just checked and it seems one has to specifically add the other models. 

[Screenshot, 2025-05-22: the model selection in the Firefly interface]

Community Expert,
May 22, 2025

That is correct, a choice of generators is offered.

The thing with generated content is that the model creates that content based on its training. It does not replay any images it learnt from. That is why, even after training on millions of images, AI commonly gets hands wrong: it has no concept of a hand, just pixels in images involving hands.

 

To reuse existing content, an AI-driven Content-Aware Fill that pulls content from other existing images may be better than AI-generated content.
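To make the idea concrete, here is a rough sketch of what "fill from a second photo of the same scene" could look like, using OpenCV template matching. The file names and coordinates are invented, the two frames are assumed to be the same size and scene, and this illustrates the concept only, not how Photoshop's feature works:

```python
# Rough concept sketch: repair a region in one photo using the best-matching
# region from a *second* photo of the same scene.
import cv2

target = cv2.imread("wedding_shot_A.jpg")   # hypothetical image with the flaw
donor = cv2.imread("wedding_shot_B.jpg")    # hypothetical second frame
x, y, w, h = 400, 300, 120, 80              # example region to replace

# Use the area around the flaw as context to locate the same spot in the donor.
context = target[y - 40 : y + h + 40, x - 40 : x + w + 40]
result = cv2.matchTemplate(donor, context, cv2.TM_CCOEFF_NORMED)
_, _, _, (bx, by) = cv2.minMaxLoc(result)   # top-left of best match in donor

# Copy the corresponding donor pixels over the flawed region (offset 40 px
# accounts for the context border around the hole).
target[y : y + h, x : x + w] = donor[by + 40 : by + 40 + h, bx + 40 : bx + 40 + w]
cv2.imwrite("wedding_shot_A_fixed.jpg", target)
```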

 

Dave

Community Beginner,
May 22, 2025

I sort of agree with your assessment. But FYI, to say that no parts in Gen AI come from training might not be true. Both GANs and latent diffusion models, whatever their differences, build out body parts from learned rule sets and statistical models. A model may start to construct a hand or a nose or whatever, but it starts with what it has accumulated and iterates. I have seen nothing in the research that can show how the model actually makes its step-by-step choices. It is a black box: you put stuff in and stuff comes out, and the process in between is only guesswork.
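As a toy illustration of that "start with noise and iterate" behavior (this is not any vendor's actual pipeline, and the "denoiser" here is a stand-in for a trained network):

```python
# Toy sketch of diffusion-style iterative refinement: begin with pure noise
# and repeatedly move it toward what the (stand-in) model has learned.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 64))              # a pure-noise "image"

def stand_in_denoiser(noisy):
    # A real model would predict image content here; this just damps noise.
    return noisy * 0.9

steps = 50
for step in range(steps):
    x = stand_in_denoiser(x)
    # Re-inject a shrinking amount of noise, as samplers typically do.
    x += rng.normal(scale=0.1, size=x.shape) * (1 - step / steps)

print(f"std after refinement: {x.std():.3f}")  # far below the initial ~1.0
```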

 

To me it is counterintuitive: I do not need the tool to create some unknown person; I need to edit the person I photographed.

Community Beginner,
May 23, 2025

I would suggest that the training set is not necessarily the issue with fingers or hands. GAN discriminators often have detailed algorithms for face construction because we have a ton of face-recognition research and modeling; not so with hands, knees, etc. There is a distinct bias toward faces as the most recognizable parts of the body. Some might say fingerprints, ear lobes, or toes, but GANs have a clear bias toward faces (or, more accurately, the human developers doing the training and scoring have that bias). The same thought applies to aging body parts: the face and hair are usually the first tell.

Community Beginner,
May 23, 2025

One additional point on my concern. I do a lot of testing of my tools, and to draw a comparison I suggest you look at ON1 Raw versus Photoshop, not because it is better, but because of how it works. ON1's AI features, such as Portrait, Effects, and NoNoise, run on the local device and do not send the image back to the cloud. In Photoshop, anything involving Gen AI, neural filters, etc., is going to get uploaded. Yes, I can disable Gen AI and use content fill, but without actually putting Wireshark on and looking at the network traffic coming and going, I have no faith in how it is being processed. This needs much better transparency, and the attitude that the cloud is as good or better is a problem for me and probably others.
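For anyone who wants to try that kind of check, here is a minimal sketch using the third-party pyshark wrapper around tshark (Wireshark must be installed, and the interface name is an assumption): watch outgoing DNS lookups while triggering a Gen AI action and see which hosts get contacted.

```python
# Rough sketch: log DNS queries leaving the machine while an editing app
# runs, to see which servers it talks to. Requires Wireshark/tshark plus
# the pyshark package; "Ethernet" is a typical Windows interface name.
import pyshark

capture = pyshark.LiveCapture(interface="Ethernet", bpf_filter="udp port 53")
for packet in capture.sniff_continuously(packet_count=100):
    try:
        print(packet.dns.qry_name)  # the domain being looked up
    except AttributeError:
        pass  # packet without a DNS query section
```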

 

Engaged,
May 23, 2025

@lutzro 

I think you're making some excellent points. I tend to shy away from generative AI in Adobe programs for the very reasons you give. If I have a library of related images, I should be able to at least have Photoshop attempt to use them to perform generative fill/replace/other tasks, without it automatically resorting to images I don't own. I have hundreds of hand drawn line art illustrations I've done over the years that I would like to be able to train it on. Sure, that might be an insufficient sample size, but I would like to have the option. Having everything default to external servers and external image data that I don't own is a deal-breaker for my illustration work.

Community Beginner,
May 24, 2025

Thanks for sharing your concerns about Gen AI. My point in writing the post was that I see Adobe making choices that I think are wrong. First, the cloud is not where I want my artwork. If Photoshop were a web-based, cloud-only service, I would be done with it in a heartbeat. Second, I don't need or want a Gen AI that creates an image for me. I need tools that make it easier to edit my own work. Thus, content fill is helpful; Gen AI is not. At least so far, it has not shown me any value.

 

To your point about the number of images needed, I think that is misleading. Gen AI needs millions of images because it is meant to be able to create the entire world: all of the people, cultures, fashions, machines, etc. That is a lot of possibilities. But if I only need it for the stuff I am working on, then the number is vastly lower.

 

A good example is NVIDIA ChatRTX. I have an RTX 4090 in my graphics computer, and thus the NVIDIA drivers. ChatRTX is a free tool you can download; it installs an AI engine that ships several current models, and CLIP is the one that processes images. It runs against local content, not the web. My sample set is about 150 images in a directory. CLIP vectorizes the directory and then lets me chat against the set: identifying expressions, body parts, etc. Lightroom can do facial recognition on whatever you load. It does not need a huge number of pictures to get pretty good.
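To show roughly what "chatting against the set" amounts to underneath, here is a hedged, self-contained sketch that ranks a local, CLIP-vectorized folder against a text query. As with the earlier sketch in this thread, it uses the public CLIP weights via transformers rather than whatever ChatRTX ships, and the folder name and query are invented:

```python
# Rough sketch: rank local images against a text prompt, all on-device.
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Vectorize the directory once (the roughly-10-minute step).
names, feats = [], []
for path in Path("portfolio").glob("*.jpg"):       # hypothetical folder
    inputs = processor(images=Image.open(path).convert("RGB"),
                       return_tensors="pt")
    with torch.no_grad():
        feats.append(model.get_image_features(**inputs)[0])
    names.append(path.name)

# Score every image against a text query, e.g. an expression to find.
text = processor(text=["a smiling face"], return_tensors="pt", padding=True)
with torch.no_grad():
    query = model.get_text_features(**text)[0]

scores = [torch.cosine_similarity(query, f, dim=0).item() for f in feats]
print(sorted(zip(scores, names), reverse=True)[:5])  # top local matches
```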

 

I am an old guy, and it used to be that if something came off a dot-matrix printer, it must be right because the computer said so. AI is just the new dot-matrix printer.

 
