lutzro
Participating Frequently
May 21, 2025
Open for Voting

Gen AI tools going the wrong direction

  • 16 replies
  • 921 views
I am not happy with the choices and direction that Gen AI is going. I do not like having my intellectual property moved from my device to cloud servers, and then having external material inserted that I view as inappropriate and unethical. When I remove a person with Gen AI and get a new face inserted that was not at the wedding and that no one knows, that is not acceptable. I do welcome Gen AI, but only when it runs locally and only uses materials in my own collection. For example, I shoot 3,000-4,000 images at a wedding, so if I need to fix backgrounds, faces, dresses, etc., I have lots of content to choose from that Gen AI could help with. Content fill seems to only work within the image that is open. Having content fill draw from 3,000 different images would be great. I can do the work as I always have; copying and pasting manually is the current workflow, as in the sketch below. That is what an AI could help with, including remove, extend, and selections.
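To make that workflow concrete, here is a rough sketch (in Python with OpenCV, since Photoshop does not expose this) of the paste I currently do by hand: lift a patch from another frame of the same shoot and blend it into the frame being repaired. The file names, coordinates, and mask are all made up for the example; the point is that a real AI assist would pick the donor frame and region for me.

```python
import cv2
import numpy as np

# The manual fix: take a patch from another frame of the same shoot
# and blend it into the image being repaired.
target = cv2.imread("wedding_0412.jpg")   # frame with the problem area
donor = cv2.imread("wedding_0413.jpg")    # frame shot a moment later

# Region of the donor frame to reuse (x, y, width, height) -- made up.
x, y, w, h = 1200, 800, 300, 200
patch = donor[y:y + h, x:x + w]

# Mask marking which pixels of the patch to keep (all of them here).
mask = 255 * np.ones(patch.shape, dtype=np.uint8)

# Where the center of the patch should land in the target frame.
center = (1350, 900)

# Poisson blending, so the pasted patch matches the local lighting.
result = cv2.seamlessClone(patch, target, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("wedding_0412_fixed.jpg", result)
```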

 

I run a CLIP AI that trains on my machine all the time. It is not an editing tool, but nonetheless it does not need a cloud server. Adobe appears not to be building tools that users can run locally, but instead charging them more for a tool that takes someone else's material and puts it into my artwork. This is WRONG.

16 replies

lutzro (Author)
Participating Frequently
May 25, 2025

Thanks for sharing your concerns about Gen AI. My point in writing the post was that I see Adobe making choices that I think are wrong. First, the cloud is not where I want my artwork. If Photoshop were a web-based, cloud-only service I would be done with it in a heartbeat. Second, I don't need or want a Gen AI that creates an image for me. I need tools that make it easier for me to edit my own work. Thus, content fill is helpful; Gen AI is not. At least so far it has not shown me any value.

 

To your point about the number of images needed: I think that is misleading. Gen AI needs millions of images because it is meant to be able to create the entire world, all of the people, cultures, fashion, machines, etc. That is a lot of possibilities. But if I only need it for the stuff I am working on, then the number is vastly lower.

 

A good example is NVIDIA ChatRTX. I have an RTX 4090 in my graphics computer and thus the NVIDIA drivers. ChatRTX is a free tool you can download; it installs an AI engine with several of the most current models, and CLIP is the one that processes images. It runs against local content, not the web. My sample was about 150 images in a directory. CLIP vectorizes the directory and then lets me chat against the set: it identifies expressions, body parts, etc. Lightroom can do facial recognition on whatever you load. It does not need a huge number of pictures to get pretty good.
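For anyone who wants to try the principle without ChatRTX, here is a minimal sketch of local CLIP search using the open-source weights through the Hugging Face transformers library. The directory name and query are made up, and this only illustrates the idea; it is not how ChatRTX is implemented internally.

```python
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Everything runs locally; the one-time model download is the only
# network access involved.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Vectorize a directory of images (my sample was ~150 files).
paths = sorted(Path("wedding_shoot").glob("*.jpg"))
images = [Image.open(p).convert("RGB") for p in paths]
with torch.no_grad():
    feats = model.get_image_features(**processor(images=images, return_tensors="pt"))
feats = feats / feats.norm(dim=-1, keepdim=True)

# Search the set with a plain-language query.
query = "bride laughing, eyes open"
with torch.no_grad():
    q = model.get_text_features(**processor(text=[query], return_tensors="pt", padding=True))
q = q / q.norm(dim=-1, keepdim=True)

# Cosine similarity between the query and every image; print the top 5.
scores = (feats @ q.T).squeeze(1)
for i in scores.argsort(descending=True)[:5].tolist():
    print(f"{paths[i].name}: {scores[i]:.3f}")
```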

 

I am an old guy, and it used to be that if something came off a dot-matrix printer it must be right, because the computer said so. AI is just the new dot-matrix printer.

 

Park Street Printers
Known Participant
May 23, 2025

@lutzro 

I think you're making some excellent points. I tend to shy away from generative AI in Adobe programs for the very reasons you give. If I have a library of related images, I should be able to at least have Photoshop attempt to use them to perform generative fill/replace/other tasks, without it automatically resorting to images I don't own. I have hundreds of hand drawn line art illustrations I've done over the years that I would like to be able to train it on. Sure, that might be an insufficient sample size, but I would like to have the option. Having everything default to external servers and external image data that I don't own is a deal-breaker for my illustration work.

lutzro (Author)
Participating Frequently
May 23, 2025

One additional point on my concern. I do a lot of testing of my tools, and to draw a comparison I suggest you look at ON1 Photo RAW vs. Photoshop, not because it is better, only for how it works. ON1's AI features like Portrait, Effects, and NoNoise run on the local device and do not send images back to a cloud. In Photoshop, anything Gen AI, Neural Filters, etc. is going to get uploaded. Yes, I can disable Gen AI and use content fill, but without actually putting Wireshark on and watching the network traffic coming and going, I have no faith in how it is being processed. This needs much better transparency, and the attitude that the cloud is just as good or better is a problem for me and probably others.
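Short of a full Wireshark capture, a quick sanity check is to list which remote hosts the Photoshop process has open while a feature runs. Here is a rough sketch using the psutil library; the process-name match is an assumption, and note it only shows where connections go, not what is inside them, so Wireshark or a proxy is still needed to see whether image data actually leaves.

```python
import psutil

# List remote endpoints held open by Photoshop-like processes.
# May require elevated privileges on some systems.
for proc in psutil.process_iter(["name"]):
    name = (proc.info["name"] or "").lower()
    if "photoshop" not in name:
        continue
    try:
        conns = proc.connections(kind="inet")
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        continue
    for c in conns:
        if c.raddr:  # only connections with a remote address
            print(f"{proc.info['name']} (pid {proc.pid}) -> "
                  f"{c.raddr.ip}:{c.raddr.port} [{c.status}]")
```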

 

lutzro (Author)
Participating Frequently
May 23, 2025

I would suggest that the training set is not necessarily the issue with fingers or hands. GAN discriminators often have detailed algorithms for face construction because we have a ton of face-recognition research and modeling; not so with hands, knees, etc. There is a distinct bias toward faces as the most recognizable parts of the body. Some might say fingerprints, or earlobes, or toes. But GANs have a clear bias toward faces (or, more accurately, the human developers doing the training and scoring have that bias). Same thought on aging body parts: face and hair are usually the first tell.

lutzro (Author)
Participating Frequently
May 22, 2025

I sort of agree with your assessment. But FYI, to say that no parts of gen AI output come from training might not be true. Both GANs and latent diffusion models build out body parts from learned rule sets and statistical models. One may start to construct a hand or nose or whatever, but it starts from what it has accumulated and iterates. There is nothing I have seen in research that can show how the model actually makes step-by-step choices. It is a black box: you put stuff in and stuff comes out, and the process in between is only guesswork.
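To illustrate the iterate-from-learned-statistics point, here is a toy of the latent-diffusion loop shape: start from pure noise and repeatedly subtract what a denoiser predicts. The "denoiser" below is an untrained stand-in for a real trained model, so it produces nothing meaningful; it only shows that the output is the cumulative result of many opaque refinement steps.

```python
import torch

# Stand-in for a trained noise-prediction network (untrained here).
denoiser = torch.nn.Conv2d(4, 4, kernel_size=3, padding=1)

x = torch.randn(1, 4, 64, 64)          # latent starts as pure noise
for step in range(50):                 # fixed number of refinement passes
    with torch.no_grad():
        predicted_noise = denoiser(x)  # model's guess at the noise component
    x = x - 0.1 * predicted_noise      # peel a little noise away each pass

# In a real pipeline, x would now be decoded to an image. Every value in x
# is the cumulative product of those 50 opaque steps; there is no single
# point where you can ask "why did it draw six fingers?"
```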

 

To me this is counterintuitive. I do not need the tool to create some unknown person; I need to edit the person I photographed.

davescm
Community Expert
May 22, 2025

That is correct, a choice of generators is offered.

The thing with generated content is that it creates that content based on training. It does not replay any images it learnt from. That is why, even after training on millions of images, AI commonly gets hands wrong. It has no concept of a hand, just pixels in images involving hands.

 

To reuse existing content, an AI-driven content-aware fill that pulls from your existing images may be better than AI-generated content.

 

Dave

c.pfaffenbichler
Community Expert
May 22, 2025

Thanks for the link! 

 

Quote from the article: 

»While Adobe Firefly can excel in certain areas, and maintains industry-leading commercial safety, many users find that competing models deliver better and more realistic results.«

So it appears to be up to the individual user to decide whether to use the commercially safe Firefly or go with one of the other models (even if accessed from the Firefly interface); one has to wonder how many of them will be aware of the distinction …

 

Edit: Just checked and it seems one has to specifically add the other models. 

lutzro (Author)
Participating Frequently
May 22, 2025

Just a slightly different aspect, but I have only experimented with the Premiere Pro clip-extend feature, which generates video from the previous few seconds of a clip and extends it forward in time. Conceptually, this is using my video and generating new material from it; not pulling from the world, just from the video. I suspect this is done by uploading to cloud processing, but I do not know for sure.

lutzro (Author)
Participating Frequently
May 22, 2025

I understand the Adobe announcement was at Adobe MAX London.

 

I saw it here: Adobe Shockingly Welcomes Third-Party AI Models to the Creative Cloud | PetaPixel

c.pfaffenbichler
Community Expert
May 22, 2025

»For example, they just announced that Veo 2, Imagen, Flux 1.1, DALL·E and other third-party AI models are being added to Firefly, i.e. most of Adobe's tools.«

Could you please post a link to that announcement?