
GPU & Multiprocessor acceleration

LEGEND, Jun 10, 2011

It would be great if Lightroom 4 had GPU support for CUDA-enabled video cards, similar to the Mercury Playback Engine in CS5. That would really speed up performance! Every couple of seconds helps when you are editing thousands of files.

Saving 2 seconds between images comes out to about an hour of saved time when you are editing 2000 images. I have 12 GB of RAM and 12 processor cores at my disposal, and sometimes I have 4 seconds between images.

Multiprocessor support would be great as well!

Idea Released
TOPICS: macOS, Windows

75 Comments
LEGEND, Apr 06, 2014

Me again: please speed up importing and converting to DNG by processing multiple pictures at once. My CPU utilization is always between 45 and 65%, averaging 50%. With 3 raw pictures at a time it would be 100%, I think. I have 8 GB of RAM (29% used overall), a quad-core Phenom II at 3.2 GHz, an SSD, and USB 3.0 / UHS cards. So there is certainly no bottleneck on the hardware side (aside from the two-core performance of the CPU).

Also, I'd like to point to the HSA advantages of the new Kaveri CPUs from AMD (maybe Intel will adopt it in the future?). It doubles the performance when decoding JPEG, for example. Maybe you are working on OpenCL support anyway, so maybe you could implement it too ;). I, at least, would buy the Kaveri and the new Lightroom that supports it :).

http://www.extremetech.com/computing/...

LEGEND, Apr 08, 2014

I have Windows 7 64-bit, 2 hard disks at 7200 rpm, an Intel Core i5, catalog and cache on the non-OS disk, a 27-inch wide-gamut display, an X-Rite i1Pro calibration device, and an NVIDIA Quadro K600 graphics card. I do not understand why Lightroom does not use the graphics card's capabilities to improve how it works and to give some slack to the CPU. The ways to optimize Lightroom are, for the most part, to use less of the software's capabilities instead of it being programmed properly, or to upgrade to a Core i7. At least give us the option to use the card when we have a good one like the Quadro series.

LEGEND, Apr 08, 2014

Er, no -- the file formats are compressed in a way that does not allow good parallel processing. That has little to do with the implementation, and a lot to do with the third-party file formats.

Adobe Employee, Apr 09, 2014

This is correct. Most third-party raw formats have a linear compressed image data stream, so image decompression is necessarily serial. One exception is the Sony ARW format, for which we do have multicore parallel decode already implemented in ACR 8.x / Lr 5.x.

On the other hand, DNG has a tiling option, and currently all DNG images written by Adobe software (e.g., DNG Converter, Convert to DNG on Lr import) are significantly faster to decompress because we can take advantage of multiple cores.
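
To make the serial-versus-tiled distinction concrete, here is a minimal sketch (not Adobe's code; decode_tile and the chunk chaining are hypothetical stand-ins for a real decompressor). A linear stream must be walked in order because each chunk depends on decoder state left by the previous one, while self-contained tiles can be fanned out to a pool. In CPython a thread pool only helps because real decoders do the heavy lifting in C and release the GIL; this is purely illustrative.

from concurrent.futures import ThreadPoolExecutor

def decode_tile(tile_bytes):
    # Hypothetical stand-in: decompress one independently compressed tile.
    return bytes(reversed(tile_bytes))

def decode_linear(chunks):
    # Linear stream: each chunk needs decoder state from the previous
    # chunk, so decoding is forced to run serially on a single core.
    state, out = 0, []
    for chunk in chunks:
        out.append(bytes(b ^ state for b in chunk))
        state = (state + len(chunk)) & 0xFF
    return out

def decode_tiled(tiles, workers=8):
    # DNG-style tiles are self-contained, so a pool can decode them
    # concurrently, and map() still returns them in tile order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(decode_tile, tiles))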

LEGEND, Aug 31, 2014

It's unfortunate that in the three years since this topic was started, Lightroom 5.5 is still pretty sluggish even on great hardware. I'm somewhat terrified of how bad it will get when I move to a newer DSLR later this year with images at twice the resolution.

I'll note, though, that when I was using Lightroom on an older Macbook Air, it felt quite a bit faster than my Core i7 desktop, which surprised me. Is the Windows version somehow hobbled compared to the OS X version?

Adobe Employee, Sep 01, 2014

Are you using an SSD on your Windows machine?

LEGEND, Sep 01, 2014

Who are you asking that question, Eric - me? If so, yes: I have my OS on one SSD and my Lightroom catalog and all the images in that catalog on another SSD. I even created a 12 GB RAM drive this weekend and loaded my entire catalog into that. Slight speed increase in some functions, but the worst slowdowns are still there and seem to be CPU- and software-bound.

I can only describe it as... latency. It's like it takes the program 1-2 seconds to start processing the develop settings on an image before it will complete them and show you the image.

Contributor, Sep 03, 2014

My basic understanding is that performance isn't changing (or is getting worse) because they are optimizing at the same pace as they are adding great features. Really, these features are *almost* on par with PS. The more adjustments you apply serially to an image in the Dev module, the longer they take to be applied EACH time you land on the image.

With 32-64GB of ram, couldn't LR pre-render a ton of raw files in the develop module using idle CPU cores while I do adjustments on a file? Or could I prerender a set then come back and work from RAM?

Or, can't you guys give me proxy DNGs like Smart Previews where I'm loading and executing on small files then applying to full DNG on export?
// Sean //

Mac Studio Ultra • 128GB RAM • 48-core GPU • 2TB SSD • 32" LG OLED • Nikon

LEGEND, Sep 03, 2014

Once upon a time, Lr kept the last 4 renderings for Develop in what was called the "negative" cache - selecting any of the last 4 images in the Develop module was instant (no loading - ready in ~0.0 seconds). Now it only keeps the last one. My sense is that it was a bother for them (kinda like concurrent rendering for slideshow), and so they nixed it.

+1 for resurrection (with the ability to do "look-ahead" instead of just "remember-behind"), with some good scheme for controlling RAM consumption - currently Lr just uses virtual memory (disk as RAM) when out of real memory, nearly always bringing the entire system down when that happens. Not a good scheme in my opinion.
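
A rough sketch of what a resurrected negative cache could look like, assuming a hypothetical render(image_id) function and a fixed RAM budget so it degrades by evicting old renders rather than swapping the whole system to disk:

from collections import OrderedDict
from concurrent.futures import ThreadPoolExecutor
import threading

class NegativeCache:
    # Remember-behind plus look-ahead develop-render cache (sketch only).
    def __init__(self, render, budget_bytes=2 * 1024**3, lookahead=2):
        self.render = render          # hypothetical: image_id -> pixel bytes
        self.budget = budget_bytes
        self.lookahead = lookahead
        self.entries = OrderedDict()  # image_id -> rendered pixels
        self.lock = threading.Lock()
        self.pool = ThreadPoolExecutor(max_workers=2)

    def _store(self, image_id, pixels):
        with self.lock:
            self.entries[image_id] = pixels
            self.entries.move_to_end(image_id)
            # Evict the oldest renders instead of spilling into swap.
            while sum(len(v) for v in self.entries.values()) > self.budget:
                self.entries.popitem(last=False)

    def get(self, ids, index):
        # Kick off look-ahead renders for the next few images.
        for i in range(index + 1, min(index + 1 + self.lookahead, len(ids))):
            with self.lock:
                cached = ids[i] in self.entries
            if not cached:
                self.pool.submit(lambda n=ids[i]: self._store(n, self.render(n)))
        with self.lock:
            if ids[index] in self.entries:
                self.entries.move_to_end(ids[index])
                return self.entries[ids[index]]
        pixels = self.render(ids[index])
        self._store(ids[index], pixels)
        return pixels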

Contributor, Sep 03, 2014

Yeah, I'd like Adobe to stop catering to the lowest common denominator. I just put together a massive system that blows away new Mac Pros and it's still not that impressive in LR. Make the minimum spec a top-end machine with 32GB+ RAM and optimize for the professionals who use this software to chew through literally 100k images a year, not the weekend-warrior crowd that wants to get by with Celeron + 1GB RAM Dell machines using 5400 rpm drives and 15" monitors. Sure, it would make the software cost more as they'd lose a great deal of their install base, but I'd pay thousands for software that was actually intelligent enough to utilize my hardware instead of putting it to waste.
Contributor, Sep 03, 2014

Amen Sean. I've not commented on these forums for three years, and overall LR performance has waned. I'm routinely generating thousands of RAW files per day on many assignments and all my ongoing hardware investments can't compensate for LR's sluggish performance. All the wasted seconds add up, and clients want even faster turnaround (social media for starters).

Call it Lightroom Pro... charge more and I'll happily pay for it with no new features just 3X-5X performance.

Participant, Sep 03, 2014

Right on, Sean and Michael. I don't know what the solutions are, but I'd pay a lot more for LR Pro as well. Importing, screen redraw, and fast switching between images in the Library and Develop modules without interim previews are pain points for me. I have super-fast cards, card readers, the latest hardware, SSDs, etc., and the bottleneck is LR. Capture One and PhotoMechanic have clear advantages in these areas - not that I want to use them, but they prove the potential for this to work better. Would love to provide further feedback and help test...

Contributor, Sep 03, 2014

I think one of the devs commented before about GPU support and how hard it would be to make it work across many systems. So... have certified GPUs for the pro version of LR: I have dual R9 280Xs, which are OpenCL beasts.

BTW... http://www.musemage.com/ <-GPU
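
For what it's worth, the gating itself would be the easy part. Here is a sketch of a certified-device check using pyopencl for device discovery; the allowlist names are examples pulled from this thread, not a real certification list:

import pyopencl as cl

# Example allowlist; a real product would ship a tested, versioned one.
CERTIFIED_GPUS = {"Radeon R9 280X", "Quadro K600", "GeForce GTX 970"}

def find_certified_gpu():
    # Walk every OpenCL platform and return the first allowlisted GPU.
    for platform in cl.get_platforms():
        for device in platform.get_devices(device_type=cl.device_type.GPU):
            if any(name in device.name for name in CERTIFIED_GPUS):
                return device
    return None

gpu = find_certified_gpu()
if gpu is not None:
    print("GPU acceleration enabled on", gpu.name)
else:
    print("No certified GPU found; falling back to the CPU pipeline.")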

Contributor, Sep 03, 2014

Sean, to your point about volume: I've shot approx. 350,000 images since 2011, all RAW. I'd have no problem complying with a short list of certified hardware... it's that important.

LR began as a stealth development effort to address the workflow demands of high-volume shooting; that raison d'être seems to have been lost along the way. These threads stand as sorry testimony to the lack of progress in this critical area.

Contributor, Sep 03, 2014

I don't know; like I said, they've continually added new features, such as perspective/lens correction, which are actually useful. The adjustment brushes and clone tools are quite close to perfect, though I'd rather have PS-like healing than LR's version.

That said, it seems like a PERFECT time for LR to really hit the 'optimize' button and dive deeper into parallelizing and look-ahead rendering, like what you see in FCPx when you idle the mouse (in FCPx my CPU and GPU are rarely idle, but the UX is snappy). I mean, look at a typical 1000-shot export: exporting all 1000 in one batch takes 10 minutes, a 500/500 parallel split takes 5 minutes, and a 333/333/333 split takes 3 minutes. Sure, the 3-up split practically locks up the CPU, but I'm making coffee. This has been exposed and talked about for years... give me the TURBO button!
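
That manual split is easy to automate. A minimal sketch along those lines, where export_one is a hypothetical stand-in for the real raw-to-JPEG export of a single image:

from multiprocessing import Pool

def export_one(path):
    # Hypothetical stand-in: decode, apply develop settings, encode JPEG.
    return path

def export_batch(paths, workers=3):
    # workers=3 mirrors the 333/333/333 split described above; raise it
    # until the CPU or the destination disk saturates.
    with Pool(processes=workers) as pool:
        return pool.map(export_one, paths)

if __name__ == "__main__":  # guard required for multiprocessing on Windows
    export_batch(["img_%04d.cr2" % i for i in range(1000)])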

Contributor, Sep 09, 2014

These guys get it:
http://www.phaseone.com/en/search/art...

http://blog.phaseone.com/processing-s...
New Here, Oct 09, 2014

HoHoHo !!! So !!! It is very hard ... HaHaHa. Learn from this company.
It seems that Adobe is struggling !!!

Explorer, Oct 09, 2014

Actually, there are some simple improvements which could be implemented very easily. For example, the import of photos does not use all available resources.

Importing e.g. 1200 RAW images and converting them into DNG on a Mac Pro with 6 cores at 3.5 GHz and 32 GB RAM, from an SD card onto a RAID array... you would simply expect quite quick progress.

But Lightroom is slow. It doesn't make use of the RAM, and it doesn't make use of the CPU power. It seems to process each image in sequence instead of processing 8-16 images in parallel. It's frustrating to see that there is no actual bottleneck, just Lightroom processing the images sequentially. It could also start generating previews as each image arrives, but no... preview generation starts only _after_ all images have been imported.

- A big issue on import: while importing you cannot work in Lightroom, so this slowness actually blocks your workflow.
- Another big issue on export: it takes so much time to export e.g. 100 images. At least there it doesn't block your workflow.

If Lightroom would at least process multiple images in parallel, it would compensate for the lack of multithreading in the processing itself.
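
As a sketch of the overlap being asked for (convert_to_dng and build_preview are hypothetical stand-ins): queue each preview render the moment its conversion finishes, instead of waiting for the whole import to complete.

from concurrent.futures import ThreadPoolExecutor, as_completed

def convert_to_dng(raw_path):
    # Hypothetical stand-in: decode the raw file and write a tiled DNG.
    return raw_path.rsplit(".", 1)[0] + ".dng"

def build_preview(dng_path):
    # Hypothetical stand-in: render a 1:1 preview from the DNG.
    return dng_path + ".preview"

def import_batch(raw_paths, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        conversions = [pool.submit(convert_to_dng, p) for p in raw_paths]
        previews = []
        for finished in as_completed(conversions):
            # Preview work starts as soon as each conversion lands,
            # not after the import of all images.
            previews.append(pool.submit(build_preview, finished.result()))
        return [p.result() for p in previews]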

Participant, Oct 10, 2014

I completely hear you about the need for multiple import sessions. As we speak/type I'm importing ~2000 images and waiting to start another session of 2000+ more. If I could just start them both it would be a huge time saver for me. As it is, I need to stick around at the studio another 35 minutes so I can start the second session and then head home.

I know a lot of people who would like to be able to start one session with one set of parameters (file renaming, keywords, etc.) on a group of images, and then immediately start a second import session on a second group of images.

Your card reader could be a bottleneck, but either way multiple import sessions (from several card readers if necessary) could help!
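
Conceptually this is just a small job queue. A sketch, with run_import standing in for a full import session with its own parameters (the card paths and patterns here are made up):

from concurrent.futures import ThreadPoolExecutor

def run_import(card_path, rename_pattern, keywords):
    # Hypothetical stand-in: copy, rename, keyword, and catalog every
    # file found on the card.
    return card_path

# One worker per card reader; both sessions run while you head home.
with ThreadPoolExecutor(max_workers=2) as pool:
    pool.submit(run_import, "/Volumes/CARD_A", "wedding-{seq}", ["wedding"])
    pool.submit(run_import, "/Volumes/CARD_B", "reception-{seq}", ["reception"])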

Community Beginner, Mar 19, 2015

I just purchased a powerful laptop with hopes of taking advantage of the dedicated GTX 860M GPU while using Lightroom 5, instead of the Intel CPU graphics. The laptop uses Nvidia Optimus technology, and I have a feeling that Lightroom 5 isn't an application profile that is supported yet. Photoshop does appear to be supported in this situation, though.
Does anyone have insight into this issue? Below are the steps I've taken in Windows 8.1.

- Open Nvidia Control Panel
- Select Manage 3D Settings
- Select the Program Settings tab
- In the 1st drop-down field, Adobe Lightroom is not available (Adobe Photoshop is)
- Clicked 'Add', selected lightroom.exe
- Switched the 2nd drop-down field to "High-performance Nvidia processor"
- Clicked Apply
- Opened Lightroom 5; the dedicated GPU did not turn on

I applied these same steps for Adobe Photoshop and everything worked fine.

I spoke with an Adobe tech support rep and he mentioned that Lightroom hasn't implemented support for this yet. Why would Photoshop be supported and not Lightroom?

Windows 8.1
Intel i7-4720 cpu
Nvidia GTX 860m 4GB gpu
8GB DDR3 memory

New Here, Apr 18, 2015

Even if you don't do it for format loading, Sony had some great success with image (really video) processing using OpenCL http://on-demand.gputechconf.com/gtc/... 🙂

Participant, Mar 28, 2016

Does that mean that if I work with DNG files (converted from CR2) this would improve the visualization of images and therefore their editing (especially heavy local-adjustment speed)? Challenge accepted. Will come back with a verdict.

Bobobo168, video, as weird as it sounds, is 5 times faster with Adobe. Images are another story.

Participant, Mar 28, 2016

IMO, and according to my experience, it's solely the CPU.

Participant, Mar 28, 2016

OK. Just checked it. Re-imported the files as DNGs with 1:1 previews.

An example of the lag: whenever I use Spot Removal (heal) more than, let's say, 3 times, the next one is SLOW. Better not to ask about the times for sharpening eyes and/or fixing the skin under the eyes with the adjustment brush. There is noticeable lag even on RAWs from a Canon 40D, which is only 10 megapixels.

Guys, you gotta be kiddin' me! Do I need to explain that I now work with 50 megapixels and what that means to me?

My rig is: i7-6700K @ 4 GHz, 32 GB RAM @ 2400 MHz, OS and scratch disk on an EVO Pro 840 500 GB SSD, plus a Black Caviar 1 TB for the catalog and actual images. GTX 970. Win 10 x64. Current catalog set to 3 GB video cache, 88 GB raw cache.

PS: I have tested on the SSD - no change. Constant CPU utilization by LR whenever it calculates a local adjustment or similar (observed in Process Explorer).

What excuse do you have NOW???? Please let me know. 

Jeffrey Tranberry, I have been complaining about this, and also about the lack of a custom shortcut-keys feature, for quite some time already. Considering migration to Capture One; currently in the workflow adoption and testing phase.

Eric Chan, my issue is not with your department, I guess. Whoever works on the local adjustments should improve them dramatically, performance-wise!

LEGEND, Mar 30, 2016

Vote for this: 
https://feedback.photoshop.com/photoshop_family/topics/lightroom-preload-in-develop-module

I think this is an easy solution for the 2-second delay that can be seen on every machine.
