Inspiring
June 11, 2011
Released

P: GPU & Multiprocessor acceleration

  • June 11, 2011
  • 75 replies
  • 2254 views

It would be great if Lightroom 4 had GPU support for CUDA-enabled video cards, similar to the Mercury Playback Engine in CS5. That would really speed up performance! Every couple of seconds helps when you are editing thousands of files.

Saving 2 seconds between images comes out to about an hour of saved time when you are editing 2,000 images. I have 12 GB of RAM and 12 processor cores at my disposal, and sometimes I have 4 seconds between images.
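(The arithmetic: 2 s saved per image × 2,000 images = 4,000 s, just over an hour.)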

Multiprocessor support would be great as well!

75 replies

Inspiring
August 31, 2014
It's unfortunate that in the three years since this topic was started, Lightroom 5.5 is still pretty sluggish even on great hardware. I'm somewhat terrified of how bad it will get when I move to a newer DSLR later this year with images at twice the resolution.

I'll note, though, that when I was using Lightroom on an older MacBook Air, it felt quite a bit faster than my Core i7 desktop, which surprised me. Is the Windows version somehow hobbled compared to the OS X version?
MadManChan2000
Adobe Employee
April 9, 2014
This is correct. Most third-party raw formats have a linear compressed image data stream, so image decompression is necessarily serial. One exception is the Sony ARW format, for which we do have multicore parallel decode already implemented in ACR 8.x / Lr 5.x.

On the other hand, DNG has a tiling option, and currently all DNG images written by Adobe software (e.g., the DNG Converter, or Convert to DNG on Lr import) are significantly faster to decompress because we can take advantage of multiple cores.
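To illustrate the difference with a rough sketch (hypothetical placeholder code, not ACR/Lr internals): each tile of a tiled DNG is an independently compressed unit, so tiles can be fanned out to a pool of worker processes, while a single linear compressed stream has to be decoded from start to finish.

```python
# Sketch only: why a tiled format parallelises and a linear stream does not.
# decode_tile() / decode_tiled_dng() are hypothetical placeholders using zlib,
# not Lightroom or Camera Raw internals.
import zlib
from concurrent.futures import ProcessPoolExecutor
from typing import List

def decode_tile(tile_bytes: bytes) -> bytes:
    """Decompress one independently compressed tile."""
    return zlib.decompress(tile_bytes)

def decode_tiled_dng(tiles: List[bytes]) -> List[bytes]:
    # Each tile is self-contained, so every core can chew on its own tile.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(decode_tile, tiles))

def decode_linear_raw(stream: bytes) -> bytes:
    # One compressed stream: later bytes depend on earlier ones,
    # so the decode is inherently sequential.
    return zlib.decompress(stream)
```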
Inspiring
April 9, 2014
Er, no -- the file formats are compressed in a way that does not allow good parallel processing. That has little to do with the implementation, and a lot to do with the third-party file formats.
Inspiring
April 9, 2014
I have Windows 7 64-bit, two 7,200 rpm hard disks, an Intel Core i5, the catalog and cache on the non-OS disk, a 27-inch wide-gamut display, an X-Rite i1 Pro calibration device, and an NVIDIA Quadro K600 graphics card. I do not understand why Lightroom does not use the graphics card's capabilities to improve how it works and to give some slack to the CPU. The usual suggestions for optimizing Lightroom are, for the most part, to use less of the software's capabilities instead of programming it properly, or to upgrade to a Core i7. At least give us the option to use the card when we have a good one like the Quadro series.
Inspiring
April 6, 2014
Me again: please speed up importing and converting to DNG by processing multiple pictures at once. My CPU utilisation is always between 45 and 65%, averaging about 50%. With three raw pictures processed at a time it would be 100%, I think. I have 8 GB of RAM (29% used overall), a quad-core Phenom II at 3.2 GHz, an SSD, and a USB 3.0 / UHS card reader, so there is certainly no bottleneck on the hardware side (aside from the CPU's per-core performance).
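Processing several files at once is plain file-level parallelism; a rough sketch of the idea (convert_one() is a hypothetical stand-in for the real raw-to-DNG step, not an Adobe API):

```python
# Sketch only: file-level parallelism for a batch raw -> DNG conversion.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def convert_one(raw_path: Path) -> Path:
    """Convert a single raw file to DNG (placeholder implementation)."""
    dng_path = raw_path.with_suffix(".dng")
    # ... the real decode / re-encode would happen here ...
    return dng_path

def convert_batch(folder: Path, workers: int = 3) -> list:
    raw_files = sorted(folder.glob("*.CR2")) + sorted(folder.glob("*.NEF"))
    # Three files in flight should be enough to push a quad-core CPU that
    # sits at ~50% with a one-file-at-a-time pipeline toward full load.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(convert_one, raw_files))
```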

Also, I'd like to point to the HSA advantages of the new Kaveri CPUs from AMD (maybe Intel will adopt it in the future?). It doubles the performance when decoding JPEG, for example. Maybe you are working on OpenCL support anyway, so perhaps you could implement it too ;). I, at least, would buy a Kaveri and the new Lightroom that supports it :).

http://www.extremetech.com/computing/...
Diko.bg
Known Participant
January 15, 2014
In addition to GPGPU features, there is some advice for speeding up LR that I found here:

http://feedback.photoshop.com/photosh...
Known Participant
January 15, 2014
"...compression methods used in camera RAW files do not lend themselves to parallel processing, slowing the pipeline. "

Sounds like Camera Raw needs some work to open the "pipeline" to more parallel processing via CPU and/or GPU. I'm sure figuring out how to take a process that does not lend itself to parallel processing and make it parallel will be quite a challenge, but it is definitely needed.
Known Participant
January 15, 2014
I agree it's time for some significant improvements in performance.

Personally I find the explanation from Adobe that "it wouldn't really help the user experience because our pipeline is different than in Ps...our bottleneck is NOT the display path!" completely laughable. It shows, on the part of the person who wrote it, a complete lack of understanding of GPGPU capabilities and of what was done in MPE.

So do GPGPU programming solutions do what we need, and do they do it faster than CPU solutions? Yes, and yes. We need on-the-fly rendering; GPGPU could do that (that is what the MPE does). We need faster initial renderings for imported image previews; GPGPU could do that. We need fast database queries on large databases to get all the parametric information for an image; GPGPU can do that.

Yes, there are some disk access bottlenecks affecting speed, but there are areas where GPGPU could provide some significant benefits.
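For what it's worth, on-the-fly parametric rendering is the textbook GPGPU workload: an edit such as an exposure adjustment is the same arithmetic applied to every pixel, and no pixel depends on any other. The sketch below uses NumPy as a readable CPU stand-in; exactly this kind of per-pixel math is what an OpenCL or CUDA kernel would run on the GPU.

```python
# Sketch only: a parametric exposure adjustment as an example of the
# per-pixel, data-parallel math that maps cleanly onto a GPU kernel.
import numpy as np

def apply_exposure(image: np.ndarray, stops: float) -> np.ndarray:
    """image: float32 values in [0, 1]; stops: exposure compensation in EV."""
    gain = 2.0 ** stops               # +1 EV doubles the linear values
    out = image * gain                # same multiply for every pixel,
                                      # no dependency between pixels
    return np.clip(out, 0.0, 1.0)

# Example: brighten a simulated ~10 MP image by one stop.
frame = np.random.rand(2592, 3888, 3).astype(np.float32)
brighter = apply_exposure(frame, stops=1.0)
```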

I believe the real reason LR hasn't gotten any GPGPU implementation is the lack of any well-developed OpenCL or CUDA libraries for Lua (the underlying language of LR). I doubt Adobe will make the effort to write an OpenCL or CUDA library for Lua so that they could then put more effort into adding GPGPU features to LR.
Diko.bg
Known Participant
January 15, 2014
Hi everyone,

I find every LR feature just great; however, I believe that it is crucial for all of us to get performance.

What LR is still missing is:
- shortcut-key mapping as in PS (especially an option for mapping keys to custom presets)
- a performance boost, especially for retouching using LR tools

The latter has been the topic of this discussion, which began two years ago...

I found the following post from a DEV (I guess):
http://forums.adobe.com/message/4129233 Post #17

"GPU accelerated screen redraw in PS was a significant effort and
required many months of development and testing."

Yeah! Two years have passed since that post, and on LR CC we are still awaiting it.

The explanation:
"Even if we did that, it wouldn't really help the user experience
because our pipeline is different than in Ps (in Ps the pixels are
effectively baked; in ours, we do all our rendering on-the-fly since
the editing is parametric -- our bottleneck is NOT the display
path!)."

If it is NOT, then please DO optimize the code performance-wise. I find it unacceptable to experience lag on my i7 machine (3770K OC @ 4 GHz, equipped with 2x8 GB of fast 1600 MHz RAM and an SSD) with 10 MP raw files. That is 10 MP, not 20 MP. Let us NOT even mention any higher resolution.

Every time I retouch blemishes and skin with the adjustment brush I get that LAG. When I am editing a single shot, PS is my weapon of choice. But for some projects I am paid by the number of shots delivered; then the correct tool is LR (or an LR alternative, if this isn't fixed soon enough).

It is time for a new and improved LR CC with better performance. In an age where 20 MP is the minimum and medium format and hi-res images are the industry standard, LR has brought the right tools. Now it is time to make them stop being sluggish.
Diko.bg
Known Participant
January 15, 2014
I wonder if this is still true for LR 5.3; I don't really use those either.

And as I understand it, it is quite reversible if needed.

Currently my workstation is due for an upgrade, so I can't check this solution's effectiveness. 😞