Hello.
I'm using Lightroom Classic 13.1 (the up-to-date version as of today). When I import all the images from a few photoshoots and apply the Adaptive AI preset "Polished Portrait" to approximately 2,000 images, it takes over an hour. I understand these things take time, but I have a powerful computer, and when I watch my PC's performance during this operation, not much is going on: the CPU sits at around 20%, the GPU fluctuates between 1% and about 20%, and there are no other visible bottlenecks; RAM, disks, etc. are all idling.
I've paid for and built a powerful machine, so tasks like this should be faster. Why is nothing in my system maxed out?
Regards
Paul
One hour for 2,000 images is 3,600 seconds for 2,000 images, or 1.8 seconds per image. What do you expect?
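As a quick sanity check of that arithmetic (assuming, as a simplification, that the hour is spent evenly across all the images):

```python
# Per-image cost if one hour is spread evenly over 2,000 images.
total_seconds = 60 * 60          # one hour
images = 2000
per_image = total_seconds / images
print(per_image)                 # 1.8 seconds per image
```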
While I don't think the following inquiry will help with the Develop-module application of the preset (2,000 photos is far too many unless you want to go out to dinner and hopefully return to find the operation finished), I do have two observations and one inquiry.
1. GPU: NVIDIA GTX 1660 Super, driver 31.0.15.4601, dated 20/10/2023. As you can see, that driver (v546.01) is quite old. Over at NVIDIA, the current Studio driver is v551.23 (Win 10 or 11). Why is your driver so old?
2. GPU: NVIDIA GTX 1660 Super, 6 GB, released back in 2019. Not so powerful any longer; a tad old.
Q1. What do you have in Preferences > Performance for the Camera Raw cache limit?
My Macbook Pro M2 Max (2023), which has a very fast CPU and GPU, takes about 0.75 seconds to compute a Subject Mask. So your observed time of about 2 seconds on a much slower GPU seems normal.
While updating a batch of Subject masks, the CPU is at about 35% utilization, and GPU is about 35%, roughly comparable to what you're observing.
As mentioned by D Fosse, LR has to move the photo's pixels from main memory to the GPU's memory (VRAM), do the computation, and then move the result back. That schlepping of pixels takes time too, and it isn't measured by Task Manager. Different systems have different CPU and GPU speeds and memory bandwidths, so the utilization figures will vary.
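To give a feel for why that transfer time matters, here is a back-of-envelope estimate. Every number in it is an illustrative assumption (sensor resolution, internal pixel format, effective PCIe bandwidth), not a measured value from Lightroom:

```python
# Rough estimate of CPU<->GPU transfer time per image.
# All figures below are assumptions for illustration only.
megapixels = 24                  # assumed sensor resolution
bytes_per_pixel = 3 * 4          # assumed RGB, 32-bit float per channel
image_bytes = megapixels * 1e6 * bytes_per_pixel

pcie_bandwidth = 6e9             # assumed ~6 GB/s effective PCIe 3.0 x16
transfer_time = 2 * image_bytes / pcie_bandwidth  # to VRAM and back

print(f"{transfer_time * 1000:.0f} ms per image just moving pixels")  # 96 ms
```

Even under these generous assumptions, roughly a tenth of a second per image goes to moving data rather than computing, and during that time neither the CPU nor the GPU shows up as "busy" in Task Manager.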
All that data has to go somewhere. So when the system isn't doing the math, it's probably shuffling data back and forth between memory and disk via operating-system paging.