Inspiring
May 22, 2011
Released

P: Better Develop Module Performance

  • May 22, 2011
  • 45 replies
  • 1031 views

On my machine (DELL E6400) adjusting the exposure of an image in the Develop module is more difficult than it should be because of the time lag between moving the slider and the image update.

I suggest a different strategy for generating visual feedback that yields better immediate performance, possibly trading off slight initial inaccuracies, but within a scheme that resolves them as soon as possible. Please see the end of this FR for the actual suggestion; I'd like to motivate it first.

It appears that the fixed order in which image operations are applied in the LR pipeline causes some adjustments to result in worse interactivity than necessary. Adjusting the exposure with no other (not even default) image adjustment settings is quick.

With some other image adjustments in place, however -- in particular when using the "Details" panel -- adjusting exposure results in jerky image updates that make it hard to find the correct level accurately and quickly.

I'm assuming that the LR image pipeline mandates that exposure updates happen relatively early, so that sharpening etc. always has to be recomputed after an exposure update. This is why updating the exposure is quick when no other adjustments have to be performed but slow otherwise.

Just to make sure that I'm making the correct assumptions, here's how I arrived at my conclusions:

My initial issue was that updates to the image in the Develop modules seemed to be slow and jerky in full screen mode whereas they were quick and led to smooth updates when I used LR in window mode. It turned out that the deciding factor is the size of the image. Images in portrait orientation are naturally displayed smaller and are less affected by the performance problems. Even images in landscape orientation can show smooth updates in full screen mode, if one artificially decreases their size by making the left hand side panels as wide as possible.

So it seems to me that:
a) LR struggles to deliver a decent image update frequency in the Develop module when the image is above a certain size (at least on my hardware).
b) This performance issue affects not only computation-intensive operations, like lens correction, but also simple ones like exposure, because potentially computation-intensive operations, e.g., sharpening/noise reduction, apparently come after exposure in the pipeline and have to be redone after each exposure update.
c) This is a big problem, since I'd like to be able to apply some default sharpening on import (or even lens correction for some lenses) and not have to postpone such settings until after I have made my other adjustments in the Develop module.

It is clear to me that the user should not be forced to perform edits in a particular order just to work around LR performance issues (in addition to the problems above, there is also the common advice to put off spot removal or lens corrections till the end).

Here's the actual suggestion:

Could LR not cache all previous adjustments and apply the current image adjustment (e.g., exposure) as the last update in a relative manner?

I realise that this goes against the fixed ordering of the pipeline, but that fixed ordering is exactly what causes the performance issues. When image operations are commutative, they should be applied in whatever order provides the best performance (with the use of caching, of course).

I also realise that some image operations (e.g., sharpening and exposure) only seem commutative and may not commute exactly: it may matter (slightly) whether the sharpening or the exposure adjustment is performed first. Nevertheless, for the initial visual feedback, it should suffice in most cases to perform the current adjustment last and thus capitalise on prior image caching.
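To make the near-commutativity point concrete, here is a toy sketch in Python (all functions and numbers are purely illustrative and have nothing to do with Lightroom's actual pipeline): exposure modelled as a scalar gain, sharpening as a linear unsharp mask, and a clip to [0, 1] standing in for the pipeline's nonlinear stages.

```python
def exposure(pixels, gain):
    """Exposure adjustment: scale every pixel by a gain factor."""
    return [p * gain for p in pixels]

def sharpen(pixels, amount=0.5):
    """Unsharp mask: boost each pixel by its difference from the local mean."""
    out = []
    for i, p in enumerate(pixels):
        left = pixels[max(i - 1, 0)]
        right = pixels[min(i + 1, len(pixels) - 1)]
        local_mean = (left + p + right) / 3
        out.append(p + amount * (p - local_mean))
    return out

def clip(pixels):
    """Clamp to displayable range -- the nonlinear step that breaks commutativity."""
    return [min(max(p, 0.0), 1.0) for p in pixels]

row = [0.1, 0.8, 0.3, 0.9, 0.2]

# The purely linear ops commute exactly: order does not matter at all.
a = sharpen(exposure(row, 1.5))
b = exposure(sharpen(row), 1.5)
assert all(abs(x - y) < 1e-9 for x, y in zip(a, b))

# With clipping between the stages, the "wrong" order (cached sharpen result,
# live exposure applied last) differs only slightly from the canonical order
# (exposure early, sharpen late) -- close enough for interactive feedback.
canonical = clip(sharpen(clip(exposure(row, 1.5))))
fast_preview = clip(exposure(clip(sharpen(row)), 1.5))
max_err = max(abs(x - y) for x, y in zip(canonical, fast_preview))
print(max_err)  # small but nonzero
```

The point: the error comes only from the nonlinear step, so the out-of-order preview is visually close, and a canonical re-render can fix it up afterwards.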

Once the user has stopped adjusting, i.e., once there is no need for immediate visual feedback anymore, LR could recompute the current image using the standard pipeline ordering. This may in some cases cause a subtle follow-up change to the image after a short while, but I would strongly prefer that if it meant my current adjustment were just a small delta on top of cached previous adjustments, giving smooth updates, rather than re-triggering (almost?) the whole pipeline.
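To make the two-phase scheme concrete, here is a minimal sketch (all names hypothetical; this is not how Lightroom is actually structured): while a slider moves, only the active operation is applied on a cached result of everything else; once it settles, the full pipeline is re-run in canonical order.

```python
# Canonical pipeline order; the active op may sit anywhere in this list.
PIPELINE = ["exposure", "sharpen", "noise_reduction"]

class PreviewEngine:
    def __init__(self, source, ops):
        self.source = source   # decoded raw data (here: just a list of floats)
        self.ops = ops         # stage name -> callable(pixels)
        self.cache = {}        # excluded stage name -> cached intermediate

    def full_render(self):
        """Slow path: canonical order, used once the slider settles."""
        pixels = self.source
        for name in PIPELINE:
            pixels = self.ops[name](pixels)
        return pixels

    def fast_preview(self, active):
        """Fast path: run every *other* op once, cache the result, then apply
        the active op last on each slider tick. Slightly wrong ordering, but
        each tick costs one op instead of the whole pipeline."""
        if active not in self.cache:
            pixels = self.source
            for name in PIPELINE:
                if name != active:
                    pixels = self.ops[name](pixels)
            self.cache[active] = pixels
        return self.ops[active](self.cache[active])
```

The fast path pays the full pipeline cost once per active slider; every subsequent tick then costs a single operation, and the slow path restores canonical ordering after interaction ends.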

Note that the proposed scheme would also address the concerns about inaccurate image renderings in the Develop module. Noise reduction could always be performed, at all magnification levels, if it only affected the speed of the final rendering -- after direct interactivity is no longer needed -- without impacting the interactivity of image adjustments that come before noise reduction in the image pipeline.

The proposed approach might even make the adjustment brush more interactive on images with a high number of edits (including a multitude of previous brush strokes). Currently, the lag between moving the mouse cursor and seeing the effect of the brush grows with the number of edits an image has. This may just point to untapped caching potential for brushes (I hope not all strokes are redrawn every time), but it may also be related to the pipeline issue.


Sean H [Seattle branch]
Known Participant
October 10, 2012
Adobe should have something similar, built in & transparent to the user. At normal screen res, show me a proxy that's optimized for my workspace. If I zoom in, pull up the RAW and give me more res. By the time I'm in the Dev module, I've already made my picks, so I don't actually do much pixel peeping at that stage. Others might, but that might be a flaw in their workflow. I've advocated face detection for intelligent 1:1 zooms, too. If this were applied, Lightroom could zoom in on the faces (not ID the faces, just ID that they are faces) and show me that subsampled region of the raw, not the full raw with panning. Right?

Your software is brilliant. I used it on about 50 images, but the conversion back to hi-res didn't load the data; I had to load it manually. Also, I don't see flags sync, meaning I can't do further filtering while developing my files, which I do quite frequently. And on the Hi->Lo step, I get a LR dialog for each selected file saying "Lightroom is concerned". So, for a 2k set, you can imagine that not working out too well. I'm probably not doing it right, though.
areohbee
Legend
October 10, 2012
Thanks Dan, for participating and informing...

I have found that the time from selecting the next image (when not in negative/ram cache) until sliders enabled depends more on resolution than format, whereas time from sliders enabled until loading indicator extinguished depends more on format. This is in agreement with Sean's report above.

After my initial experiments with "fast-loading" DNG, (when I didn't know it was possible to reduce the resolution and still maintain raw editing characteristics), I abandoned the EditInLightroom project, since it wasn't helping with the most critical lag (time until sliders enabled). Moral of the story - it was the resolution drop that helps enable those sliders more quickly.

|> " If there could be a way for me to queue up some job to build out these renders while I sleep so I can skip them while I edit, that would be HUGE time savings."

This is precisely what EditInLightroom allows you to do (granted, you lose the ability to zoom 1:1). Is it not helping? Or, are you simply asking Adobe for similar help, built-in & smooth...

Rob
Sean H [Seattle branch]
Known Participant
October 10, 2012
I moved 10 NEF files to my SSD which has a sequential, uncompressed read of about 500MB/s. Extremely fast drive, same or faster than a striped array of raptors especially with non-sequential stuff. No difference in the delay. This is all CPU bound. It does render faster with smaller screen sizes in both cases (SSD & 3TB barracuda spinning disk)

But I had another question before: What's the difference between the ACR cache version of the raw file that I made in the Library module and the render version that gets made when I choose the raw file in the Develop module? Is it a DNG versus a JPG? more data?

I'm basically asking this stuff because I do about 30 jobs a year that require 2,000-3,000 photos each, so probably 100,000 images per year when you add in odd jobs. If there could be a way (LR 5) for me to queue up some job to build out these renders while I sleep so I can skip them while I edit, that would be HUGE time savings. That's what I do now for the 1:1 in the Library module for pruning, but you seem to be saying that this 1:1 cache build is ignored for some good reason when I hit the Develop module. I don't see why, but I'm ignorant on the mechanics here.
Participating Frequently
October 10, 2012
The Library case just has to scoop the bits off disk, decompress the JPEG, and blit assuming the preview is up to date.

The Develop case is actually grabbing the raw data (possibly short circuiting the early stages of the pipeline via the ACR cache, though that's not usually a huge savings, honestly) and performing the render right at that moment.

At your screen resolution, it's probably using the 1:1 pyramid level, too, which means any sharpening and noise reduction settings are more expensive to boot. Having now seen your resolution info, I'm not too surprised that the DNG fast load isn't helping (sorry I didn't reply soon enough to save you the DNG experiment). That preview is capped at 2K on the long edge, IIRC, so in your case it's probably falling back to doing a full render anyway.

There's another DNG load speed boost, but it only applies if you're reading them from a really fast disk (striped 10K Raptors fast) and have lots of cores. I got mixed results whenever I varied the setup at all (and it didn't work very well on Windows, for whatever reason), and I ran that experiment quite some time ago, so I wouldn't bank on that one, either.

The scenarios that might be interesting to compare would be:
- what's the load time if the images are on a faster disk?
- what's the load time if the images have default settings (no sharpening or noise reduction, especially)?
- what's the load time if you reduce your window size so that the on screen display is less than 1/4 of the total resolution of the image?

If the latter two help, the problem is bound by rendering; if the former helps, I/O is holding you up.
Sean H [Seattle branch]
Known Participant
October 10, 2012
I concur with Rob-- the DNG development responsiveness was no faster than NEF. In fact, I am seeing a similar delay with JPG development...

Camera: 16MP Nikon D4
Computer: Mac Pro 2010, 3.33GHz 6-core, 24GB RAM, Catalog on 240 OWC Extreme SSD (550 read/write uncompressed sequential on PCIe 6G SATA), images on 3TB Seagate Barracuda (150 read/write)

Full screen 2560x1440, just dev panel open about 350px wide on right, same 11 horizontal images
NEF = 1.3, 1.5, 1.3, 1.4, 1.3, 1.6, 1.4, 1.4, 1.3, 1.5, 1.5
JPG = 1.2, 2.3, 1.6, 2.0, 1.6, 1.9, 1.9, 1.8, 1.9, 1.8, 1.8
DNG = 1.4, 1.7, 1.9, 2.0, 2.3, 3.1, 2.2, 4.2, 2.3, 1.9, 1.9

So, DNG is actually a bit slower on my system (in some cases much slower). DNG also continues to render the image while the sliders are available, adding feedback lag between slider adjustment and image update.

Now, compare that to a 1:1 cache pull in the Library module which is pretty much 0.1 seconds from sharp render to the next sharp render. Dan, I'm not seeing why a 1:1 jpg render in cache isn't the same as the Development Module render that's presented. What's the difference? I don't see any compression artifacts in either. I really, really want a preferences option to allow me to bypass that re-render. What about an option for those of us that could stuff 64-128GB of ram in our system to preload all of the NEF/DNG/JPG files into RAM? Wouldn't that make Lightroom fly?
Sean H [Seattle branch]
Known Participant
October 10, 2012
I'm going to try DNG again. I back up my 'virgin' NEF files to backblaze cloud and blu-ray before touching them in Lightroom.
areohbee
Legend
October 10, 2012
So, any idea what caused the corruption? - any reason to believe if you try again the same thing won't happen again?...
Sean H [Seattle branch]
Known Participant
October 10, 2012
The DNGs were stored on a Drobo so stored redundantly on many disks at once.
areohbee
Legend
October 9, 2012
|> " I'll give the DNG a try..."

My experience: raw DNGs (w/fast-load data) load for develop no faster than raw NEFs (with ACR cache entry). But, that's just one data point. Please do report back after you try it.

|> " I had tried it in prior versions but found a ton of DNG corruption across a year's worth of work files..."

That's disturbing. Do you have any idea what caused this? Were they stored on optical media or a wonky disk or something? (it's very rare for corruption to be caused by Lr writing the DNGs).

Rob
areohbee
Legend
October 9, 2012
|> " Most likely it is grounded in a (somewhat puritanical) mindset of ensuring Develop view only shows 100% genuine pixels or somesuch."

I can't help but think a part of the reason is that Lr's dev module is essentially a thin veneer over the same ACR used in Photoshop (which does not share the code to access Lr's previews...). So, there may be an ACR code-sharing thing too... - just guessing.

|> " I do know that DNG files created with more recent versions of Lightroom embed a proxy preview (I don't recall the name) that can accelerate Develop view loading significantly."

Could the name be: "fast-load data"? My experience: marginal improvement, about the same as the ACR cache (~15% faster), which makes me think it just houses the ACR-cache data internally. Again, this is speculation, albeit based on experience...