
Lightroom 4.2 very poor CPU usage

Participant, Oct 24, 2012

Lightroom 4.2 seems reasonably fast when I work with it, whether it's browsing photos or adjusting sliders, although it takes several seconds to go into develop mode after launching it for the first time.

But now I'm exporting 1498 photos at 5184×3456 pixels, and it's taking quite a while — about an hour or more. This is on a brand new system I just assembled: an i7-3930K with 32 GB of RAM that flies with every other program. While exporting this batch I opened Task Manager and noticed that CPU usage never goes to 100%, not even close. There are peaks of 50%, but on average it must be in the 20s:

[Screenshot: Lightroom CPU usage.PNG — Task Manager during the export]

This is very disappointing on a CPU that has 6 physical cores and 12 logical cores with Hyper-Threading, running at 3.2 GHz with turbo to 3.8 GHz. The batch is exporting these photos from one SATA 6 Gb/s drive to another SATA 6 Gb/s drive, and the HD LED barely lights up, so I know the hard drives are not the bottleneck. So I'm wondering: is Lightroom 4.2 really that bad when it comes to taking advantage of CPU cores? Is there anything I can do to make it use the CPU more?

Thanks,

Sebastian

Views: 22.8K

Community guidelines
Be kind and respectful, give credit to the original source of content, and search for duplicates before posting.
105 Replies
New Here, Apr 10, 2013

Not true, Bob; I've not owned a Mac for over 20 years, and that one was the size of a large toaster.

As I said, the Previews.db and root-pixels.db were right where I expected them to be, but the catalog extension was .lrcat and not .db. When I changed the opener to look for any file, the catalog appeared, and I opened it to confirm it was a SQLite db. I must say it seems to have a very complex and confusing structure. A bit of overkill, perhaps?

While not as complicated, neither Previews.db nor root-pixels.db seems to warrant an RDB at all. Especially previews, with many levels and indexes, plus a thread count of 4 and a tiny 4K block size. Since Rob has experience with the workings of LR, perhaps he can comment on the rationale for using SQLite for these two datasets?
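Bill's inspection is easy to reproduce: a .lrcat catalog opens with any SQLite client. A minimal Python sketch, assuming only the standard-library sqlite3 module (the catalog path is whatever file you point it at; a real catalog will list its own table names):

```python
import sqlite3

def inspect_catalog(path):
    """Open a Lightroom catalog read-only and list its table names.

    A .lrcat file is an ordinary SQLite database, so the stdlib
    sqlite3 module can read it directly. mode=ro avoids taking
    any write lock on a catalog Lightroom may have open.
    """
    con = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
    try:
        rows = con.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
        return [name for (name,) in rows]
    finally:
        con.close()
```

Pointing this at Previews.db or root-pixels.db works the same way, since all three are SQLite files.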

LEGEND, Apr 10, 2013

BillAnderson1 wrote:

Since Rob has experience with the workings of LR perhaps he can comment on the rationale for using SQLite for these two datasets?

I had forgotten about root-pixels.db - thanks, so it's actually 3 databases. Granted, root-pixels.db is not much of a database... I really don't know why they used a (separate) database for it, except maybe because it was convenient enough.

I'd have to guess about rationale for preview db too: convenience and fast random access would be my guess.

Summary:

------------

I speculate: once you already have sqlite3 wired in for the catalog, it seems ripe for storing other data too...

I think people sometimes miss that sqlite3 is, for the most part, very fast! It's about as fast as or faster than MySQL at most of the things it's used for in Lr. No doubt that was a main factor when it was chosen for Lightroom. There is also potential for slow performance depending on how complex queries/sub-queries are written..., and I've never seen the Lr source code, nor am I a database expert, so I'm really not qualified to rule anything (in or) out for sure - I'm just sayin'...
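A rough illustration of the "very fast" claim: with a single writer and one transaction around a batch of inserts, SQLite's write throughput is substantial. A self-contained sketch (the schema is invented for the demo, not Lightroom's):

```python
import sqlite3
import time

def bulk_insert(db_path, n=10_000):
    """Insert n rows inside one transaction and time it.

    With a single writer and a single commit, the per-row cost is
    tiny; much of the expense people attribute to SQLite is really
    the per-commit fsync when every write is its own transaction.
    The 'edits' schema is made up for this demo.
    """
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS edits (id INTEGER, slider REAL)")
    t0 = time.perf_counter()
    with con:  # one transaction, committed once at the end
        con.executemany(
            "INSERT INTO edits VALUES (?, ?)",
            ((i, i * 0.01) for i in range(n)),
        )
    elapsed = time.perf_counter() - t0
    count = con.execute("SELECT COUNT(*) FROM edits").fetchone()[0]
    con.close()
    return count, elapsed
```

On any modern machine this completes in a small fraction of a second, which is consistent with Rob's point that the engine itself is rarely the bottleneck for single-writer workloads.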

R

New Here, Apr 10, 2013

I’m sure you’re right, Rob; SQLite can be very fast in a single instance running alone with mostly reads and very few writes. That’s what it was designed for in 2000. According to Wikipedia, “the goals of SQLite were to allow the program to be operated without installing a database management system or requiring a database administrator.” Such an easy-to-use product could be seen as a useful tool to standardize on for all disk I/O, without regard to performance issues that may arise as complexity grows exponentially. There may even be cases where transient working datasets use SQLite, but I can’t prove that. The problem with using it for datasets such as previews (and, for that matter, the catalog) is best expressed with this quote, also from Wikipedia:

“Several computer processes or threads may access the same database concurrently. Several read accesses can be satisfied in parallel. A write access can only be satisfied if no other accesses are currently being serviced. Otherwise, the write access fails with an error code (or can automatically be retried until a configurable timeout expires [could this be the mysterious wait state? -BA]). This concurrent access situation would change when dealing with temporary tables. This restriction is relaxed in version [SQLite] 3.7 when WAL is turned on enabling concurrent reads and writes”

After reading about Write-Ahead Logging (WAL) and looking for the telltale “quasi-persistent "-wal" file and "-shm" shared memory file associated with each database” and not finding them, I think it’s safe to say WAL is not part of LR, and read/write locking could be a major part of the performance issues. Can anyone confirm or deny the use of WAL in LR? Or maybe that question should be posed to Adobe? Perhaps a representative from Adobe could answer some of these questions and save us the time of “speculating”.
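Bill's WAL check can be scripted rather than eyeballed. A sketch using only the stdlib sqlite3 module; it reports both signals he mentions - the journal_mode pragma and the presence of a "-wal" sidecar file:

```python
import os
import sqlite3

def check_wal(path):
    """Report whether a SQLite database appears to be using WAL.

    Two independent signals: the journal_mode pragma ('wal' vs the
    default 'delete'), and whether a quasi-persistent '-wal' sidecar
    file sits next to the database. Note the sidecar is normally
    removed when the last connection closes, so its absence alone
    is not conclusive on an idle database.
    """
    con = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
    try:
        (mode,) = con.execute("PRAGMA journal_mode").fetchone()
    finally:
        con.close()
    return {
        "journal_mode": mode,
        "wal_file_present": os.path.exists(path + "-wal"),
    }
```

The pragma is the stronger signal, since WAL mode is persistent: it is recorded in the database file itself and survives closing all connections.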

LEGEND, Apr 10, 2013

I think the only person at Adobe ever willing to share such information in the open forum was Dan Tull (and even then, it tended to be brief/minimal), and he's moved to another department now. My guess: nobody at Adobe now will be willing to say a word about this, here. (I've personally never seen such tell-tale WAL files and would bet Lr is not using it).

Note: sqlite3 is very fast at writing data too (provided you don't have multiple simultaneous writers...).

For example, in the develop module, when you change a setting, the new setting is saved immediately in the catalog, but no other (Lr native) task is updating the catalog at that time (unless you have some other asynchronous task running, e.g. as can be seen in the progress area), and of course it uses the ultra-fast random-access updating methods that RDBs are so famous for. Granted, it is saving undo information and edit history... as well as the current setting. Still, my speculation is that sqlite catalog-updating time whilst using the develop module is a small fraction of the total time/lag... (which is probably one of the reasons putting the catalog on a separate SSD does not significantly improve performance of the dev module).

Perhaps worth noting: if you enable auto-write XMP, then Lr will re-write an entire XMP sidecar file after each slider change or brush stroke (or two or three... - if you make them fast enough), and even that does not result in noticeable lag until you have hundreds of paint strokes and/or dozens of snapshots - and even then, the additional delay one thinks they notice may be imaginary. Granted, that option doesn't increase database activity, but it does increase disk activity.

PS - I run dozens of plugins (which I have written, and done performance testing of...) that use background updating of the Lr catalog via SDK methods, and there can be significant delays in gaining catalog access when there are multiple simultaneous contenders. Not sure where to go with that, but it seemed germane somehow... Also of interest perhaps: @SDK4, Lr supports a configurable write-request timeout for plugin write-access functions - I used it for a while, but due to bugginess, have fallen back to the old method for catalog write-access contention invented for Lr3: wait a random amount of time, then try again (the same method used for Ethernet access contention).
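Rob's Lr3-era fallback - wait a random amount of time, then try again - can be sketched in Python against a plain SQLite file. Purely illustrative (the real Lightroom SDK is Lua, and all names below are made up for the demo):

```python
import random
import sqlite3
import time

def write_with_backoff(db_path, sql, params=(), attempts=10, max_wait=0.5):
    """Attempt a write, retrying with a random wait on lock contention.

    Mirrors the strategy described above (and Ethernet's random
    backoff): fail fast when the database is locked, sleep a random
    interval, and try again. Not Lightroom SDK code.
    """
    for _ in range(attempts):
        try:
            # timeout=0 makes a locked database raise immediately
            # instead of blocking inside sqlite3 itself
            con = sqlite3.connect(db_path, timeout=0)
            with con:  # commits on success, rolls back on error
                con.execute(sql, params)
            con.close()
            return True
        except sqlite3.OperationalError as e:
            if "locked" not in str(e):
                raise
            time.sleep(random.uniform(0, max_wait))  # random backoff
    return False
```

The interesting knob is max_wait: too short and contenders collide again immediately; too long and the caller stalls for no reason.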

R

New Here, Apr 10, 2013

PS - I run dozens of plugins (which I have written, and done performance testing of...) that use background updating of the Lr catalog via SDK methods, and there can be significant delays in gaining catalog access when there are multiple simultaneous contenders. Not sure where to go with that, but it seemed germane somehow... Also of interest perhaps: @SDK4, Lr supports a configurable write-request timeout for plugin write-access functions - I used it for a while, but due to bugginess, have fallen back to the old method for catalog write-access contention invented for Lr3: wait a random amount of time, then try again (the same method used for Ethernet access contention).

R

I think these comments are very germane, and your remark — "Lr supports a configurable write request timeout for plugin write access functions - I used it for a while, but due to bugginess, have fallen back to the old method for catalog write access contention invented for Lr3: wait a random amount of time then try again (same method used for Ethernet access contention)" — is loaded with info. As you describe it, it sounds like LR has had a history of contention issues that can affect performance. Is that how you see it?

LEGEND, Apr 10, 2013

BillAnderson1 wrote:

As you describe it, it sounds like LR has had a history of contention issues that can affect performance? Is that how you see it?

Honestly Bill, I don't know what to make of it.

I mean, there are a lot of things Lr native can do fast, that plugins can also do, but are much slower (and some things that are almost as fast).

I just don't know enough to apply what I've experienced updating catalog via plugins, to Lr native (database/sqlite-wise...).

Perhaps it suffices to say that having multiple contenders for write access is probably not optimal, and is probably the reason previews are in a separate database from the catalog - so preview-writing activities don't interfere with develop-writing activities... Anyway, I'm getting pretty far out on a limb at this point, so take this with a grain of salt...

PS - A note to plugin authors: If you don't handle catalog write access contention errors in your plugins, they won't work reliably for anybody using plugins with background tasks.

John Ellis & Jeffrey Friedl are aware of such issues and have programmed around them. John Beardy is also aware of this issue but, as of a few months ago, had yet to remedy it (not sure about today - maybe he has enhanced it since last I knew). Matt Dawson is also aware of the issue, but I've no idea if he's taken the requisite steps to assure there isn't a problem in his plugins. Not sure about other plugin authors.

A note to plugin users: if you ever have the error - "LrCatalog:with ... was blocked ...", you should contact the plugin vendor and request they enhance their catalog update method, even if next time you try it, the error does not happen.

Rob

New Here, Apr 10, 2013

Rob, you’re not at all out on a limb; you’re describing the symptoms of the problem and offering a suggestion of a possible deficiency. I applaud you for that. I spent the first 20 years of my career deep in the bowels of the early development of the IBM MVS operating system that is used on large mainframe computers. As processing power grew, memory sizes grew, and disk storage ballooned, we had to accept the changes coming from IBM on a monthly basis. The changes came fast and furious, and they were always buggy. My brethren and I had to develop a keen sense of where the clues were as to why something wasn’t working the way it should. We’d give our eyeteeth for an astute application developer like you who could offer real-world observations. Congratulations.

Given what you’ve said, I am going to go out on a limb and speculate that you’ve kicked the dirt off the core difficulty: LR and its use of the SQLite RDB can create some serious “logical” errors involving unpredictable response times while waiting for event-completion status to be posted. While not exactly the unifying theory of the universe, it represents a good starting point.

Adobe really ought to step up and have a look at un-embedding the RDB and moving to a modern one such as MySQL. SQLite was developed in the late ’90s/early 2000s, when multi-processors running multiple threads were a pipe dream. Just sayin’…

Bill

Community Beginner, Apr 11, 2013

Hi Bill

BillAnderson1 wrote:

Adobe really ought to step up and have a look at un-embedding the RDB and moving to a modern one such as MySQL. SQLite was developed in the late 90/early 2000 when multi-processors running multiple threads were a pipe dream. Just sayin’…

Bill

MySQL - initial release May 1995

SQLite - initial release August 2000

Both projects have released new stable versions in the past 3 months, so in light of these facts, I think your point that SQLite is bad because of its heritage is somewhat diminished. Now, it could be that Adobe has branched SQLite and subtly broken their version, or not kept up with the latest updates.

I'm not saying moving to MySQL / SQL Server / Oracle / Postgres / whateverDB would not be a good idea, but I'm sure it would open up its own new set of issues - backup regimes and local firewalls spring immediately to mind, as does response time across a network (it would be logical to make Lightroom multi-user if an external RDBMS were used).


New Here, Apr 11, 2013

Zak, the operative word in my recommendation is “un-embed”, and just one of the benefits would be abandoning their own added code. As you put it, “[Adobe might have] subtly broken their version or not kept up with the latest updates.” Based on Rob’s description of his problems with timing and bugginess, I’d say they have heavily modified the code and gotten way out of their league. Multi-cores and multi-threads are a very difficult processing environment when database integrity is at stake. There are all kinds of locking and unlocking timing issues. That’s why the SQLite people added WAL to the product, but with many, many caveats. And LR doesn’t even appear to have that working.

Bill

LEGEND, Apr 11, 2013

Bill,

Although I appreciate the kudos, I'm pretty sure I do not know whether flushing sqlite in favor of mysql or another would improve things or not. My guess: it wouldn't (or at least not very much). But you may be right, and it would - I just don't know.

But ya know who does know? - Adobe Lightroom software developers.

Even if they screwed up by choosing sqlite (I'm not convinced), they know it by now, and no doubt have some feel for the impact of replacement.

My thoughts: we should just let them do their jobs, instead of all of this armchair speculation.

I hope I haven't offended, but that's my take on the situation...

A story for those with too much time on their hands:

---------------------------------------------------------------

Once upon a time, I worked on a project which used a Z80 microprocessor, 1MHz clock, 32K RAM (yes, 'K'). We enhanced the hardware to run an 80186, 150MHz clock, and 1MB RAM, then re-wrote the software to do about the same thing as before (wrote a custom operating system for it, and used more modern software engineering methodology, e.g. C++ instead of PLM/Assembler). Results: it did about the same thing as before, only a little bit slower. Oops. Over the course of the next decade we improved performance, but the point is: when you program like you have infinite resources, you will run out of them.

I suspect Adobe got into a bit of the same boat with Lightroom - they were aiming their sights high in a lot of ways, but let performance take a back seat. Now they have a bit of a beast on their hands. No doubt they'll keep it in check as much as possible over the next decade, but it will probably never be a screamer... - so much for no more armchair speculation...

OK, so my (armchair) speculation is this: Adobe would have to thoroughly re-write Lightroom with a new eye on performance to remedy its (*normal*) slowness. I.e. there is no silver bullet, no single bottleneck (or small set of bottlenecks) that could be surgically repaired... - but again: this is all idle speculation, granted from a semi-educated (albeit biased by my own experience...) perspective. Worth noting: Lr uses ACR, which means if Adobe thoroughly re-wrote Lightroom to be a star performer, they'd probably have to rewrite ACR and some of Photoshop too.

*abnormal* slowness is a different issue entirely - no comment.

---------------------------------------------------------------

Cheers,

Rob

New Here, Apr 11, 2013

Rob, you’re too modest and that’s the last I’ll say about that.

I’m not armchair-quarterbacking this from the perspective of the scruffy-haired, tech-geek systems-programmer position of my youth; I’m recommending that Adobe make a business decision, based on the experience I gained from my Senior Vice President position at a major BofA subsidiary.

The business problem is this: the LR product has been experiencing various and persistent performance problems since the introduction of version 4. This is having a serious effect on customer satisfaction and has earned them a poor reputation in the industry. After exhaustive research, it’s been determined that the SQLite product, as implemented in its embedded form, is a likely source of at least a significant percentage of the problem.

Here are the actionable options for resolution:

  1. Dedicate significant resources to understanding the exact nature of the problems occurring under the covers within the embedded SQLite code, and fix them. This would require either hiring outside database administrators and database software engineers, or carving off some of the application engineers and retraining them for this job.
    1. Pros
      1. Adobe is in complete control and is dedicating its own resources to the issue.
    2. Cons
      1. As technology changes with exponential growth of complexity, Adobe will have to dedicate these resources to the never-ending evolution of technology. This is not a transitory problem; it will occur over and over again.

  2. Pick a different implementation of SQL, such as MySQL, and convert the existing RDBs to that. Keep in mind that SQL (Structured Query Language) is a special-purpose programming language designed for managing data held in relational database management systems (RDBMS). If Adobe has followed protocols and used the language API properly, conversion of the underlying DB file is a relatively painless task and shouldn’t require much expense to implement.
    1. Pros
      1. Adobe’s core competency is developing digital photo editing products, and the sum total of their value proposition is the ability to do that better than anyone else. Dedicating resources to anything other than the core competency would be self-destructive.
      2. Purchasing a commodity tool whose provider has resources dedicated to keeping their product in front of technology evolution makes for a rational business use of resources.
      3. Further implementation of networked instances of the RDBMS becomes a reality, e.g. photography businesses could install servers with the centralized RDBMS accessible from anywhere in the world.
    2. Cons: None

I rest my case.

Bill

PS Damn this is fun!

LEGEND, Apr 11, 2013

BillAnderson1 wrote:

Rob, you’re too modest and that’s the last I’ll say about that.

...

I rest my case.

Bill

PS Damn this is fun!

Ok Bill - uncle (fair enough...).

PS - I guess modest people aren't as much fun .

Rule #5: Enjoy!

Rob

Community Beginner, Apr 12, 2013

BillAnderson1 wrote:

lots of stuff

I rest my case.

Bill

PS Damn this is fun!

I can see your core argument: let Adobe focus on what it's good at (images) and rely on a third party whose core competency is databases and hope they are at least as good at that as Adobe is.

If I were in the meeting, I would be asking why your preferred option has no cons. How will we turn MySQL into a consumer-ready product (i.e. simplifying configuration, administration, backup)? How much will we need to increase the price of Lightroom? Who will support the users, and how much will it cost? Will minimum system requirements need to change? How will it perform on an entry-level machine? Do we need to form a relationship with Oracle (the owners of MySQL)? And so on...

Ignoring the potential performance benefits/losses, my guess would be that there would be more support cases due to issues with the database server. Bear in mind the average level of users in this forum is possibly higher than of those who don't post here - and even so, some of the issues posted are "I can't find my catalog / images / Lightroom". MySQL is a far more complex standalone product compared to an embedded database like SQLite, and I would be concerned that this would create a whole new world of pain - this is why DBAs exist.

Rob Cole wrote:

Although I appreciate the kudos, I'm pretty sure I do not know whether flushing sqlite in favor of mysql or another would improve things or not. my guess: it wouldn't (or at least not very much). But you may be right, and it would - I just don't know.

But ya know who does know? - Adobe Lightroom software developers.

This. Let's face it, SQLite is not the only embedded third party product in Lightroom, for example, the issue could be the embedded Lua interpreter and how it does I/O or interacts with the database (not saying it is or pointing any fingers).

New Here, Apr 12, 2013

Zak, my one-page business case, typed with two fingers in 15 minutes, is meant to illustrate the situation as I suspect it exists. That being said, I stand by “none”.

The context of the case is the discussion of the LR product’s future corporate resource allocation. The target audience would be the executives who are responsible for all of the products in the "The Adobe Photoshop Family": Photoshop and Photoshop Extended, Photoshop Elements and Photoshop Lightroom. The execs have got to make the classic infrastructure support decision. One of the most famous examples of this occurred when Apple (for the third time) switched CPUs from the use of PowerPC microprocessors supplied by Freescale (formerly Motorola) and IBM in its Macintosh computers, to processors designed and manufactured by Intel in 2006.

I’ll say it this way, the exponential growth of complexity has reached the point where a decision has to be made in order to ensure that the future growth of the product line is not hindered by an ancillary infrastructure support system that needs serious attention.

 

Still havin’ fun, Bill

Community Beginner, Apr 12, 2013

Let's hope (for the sake of the users having issues) that the execs (and techs) are having this discussion, and not "what features will we release in LR5".

New Here, Apr 12, 2013

Taking the pro side a bit further, I offer this: I have both PSE and LR, and I know for a fact that they have individual embedded copies of SQLite and don’t share catalogs or previews. If both products un-embedded SQLite and converted to a standalone RDBMS, they could share these and let the RDBMS manage contention and asynchronous updating. Note here: most standalone RDBMSs can be made to disappear in the user's eyes and moved under the covers. There would be no DBAing needed by the user. Many types of software operate under the covers without user knowledge; Java comes to mind. Further, moving to a standalone RDBMS doesn’t mean it has to move to a server; it can operate independently on the same machine under a separate PID.

That’s the beauty of running the RDBMS as a separate instance: many applications can share the data and leave the housekeeping to a product specifically designed for that purpose.

Carrying this to its natural conclusion, Adobe’s entire Creative Cloud could benefit from this move…

New Here, Apr 12, 2013

Amen.

Explorer, May 04, 2013

To add to the discussion...

I just moved from a Mac Pro 1,1 with dual Xeon quad-cores and 32GB RAM, running Windows 7 Ultimate x64 in Boot Camp under Mountain Lion, to a new HP Z620 with dual Xeon six-core E5-2620s (trying to save some money) and 32GB RAM, running Windows 8 Enterprise x64.

I was shocked that the performance of LR 4.x was not at least double that of my previous system. About the only thing I noticed being faster is the Spot Removal tool... otherwise it is about a dead heat.

I understand about batching jobs to increase performance, but I expected more from single jobs. I have 24 threads and only half are being used to any degree, and never at 100%, which I would expect. All of my other software has taken full advantage of the new cores and threads. Autodesk 3ds Max and V-Ray (inside Max) take all 24 threads to 100% immediately on start. HUGE increase in performance while rendering 3D artwork in Max... I'd say at least 2.5 times more performance than the slightly upgraded Mac Pro 1,1.

So, I want to add my voice to the crowd, and look forward to Adobe making full use of the user's hardware where warranted - even if it requires my permission.

BTW, my boot and scratch disks are the fastest consumer drives right now: Samsung 840 Pros, 256GB each. My data drive is a WD Black Series 2TB model.

Thank you for your insight!

New Here, May 05, 2013

Gonzalu, yours is a very concrete example of LR4's (or LR5's) inability to take advantage of multiple processors, or even single very fast processors. Read my message number 94 for what I believe to be the reason for this: http://forums.adobe.com/message/5289225

Bill

Explorer, May 05, 2013


...thanks Bill

And to add: I am also working with Nikon D800 files, which are likely the largest files of any 35mm dSLR camera right now. My catalog is HUGE — yes, over 2GB — so if you're correct about SQLite, we are doomed; a relational journaling DB is the only way to go then. Regardless, all I would like is true acceleration of anything that is CPU bound. If calculating the pixels from my image is CPU bound, please, use up all 24 threads... go for it, I don't mind my workstation being sluggish for other tasks. I am ONLY WORKING ON ONE TASK at a time, believe me.

Or at least give users CHOICES... give [me] the choice of super-fast developing and slow GUI response, or super-fast GUI response and really poor image-developing performance. Oh, and make it a slider, not an ON-OFF.

Thanks

Community Expert, Apr 10, 2013

BillAnderson1 wrote:

...

Ok, here’s an example of blazing speed: I just ran this test less than 5 minutes ago. I fired up LR in Library mode; I selected 14 photos that I took this morning, all DNG averaging 30MB; I exported them to a folder on a separate disk. I exported them as JPG at 1620x960 with no sharpening. It took 1 minute 38 seconds. I shut down LR and deleted the JPGs from the folder. I then fired up Photoshop Elements 11; I had earlier imported the same 14 photos to PSE 11's catalogue, so I selected them for export. I set the export size to the same 1620x960 pixel dimensions. The PSE 11 export took 9.4 seconds. The JPGs were pretty much the same in quality and size.

...

Bill

Just curious :

Did you try exporting the original DNG to the final JPEG format from PSE11 and ACR, to get a similar workflow?

By the way, PSE11 does not require a powerful computer and is happy with Windows XP.

Incidentally, I was doing tests yesterday and found that PSE11 uses two more SQLite databases besides the catalog file and cache (previews) files... Much simpler structure, though.
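Since both catalogs are plain SQLite files, anyone curious can peek at the schema with Python's standard library alone. A small sketch (it builds a throwaway database so it runs anywhere; to inspect a real catalog, point list_tables() at a *copy* of the .lrcat, never the live file):

```python
import os
import sqlite3
import tempfile

def list_tables(db_path):
    """Return the table names stored in a SQLite file, e.g. a copy of an .lrcat."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
    return [name for (name,) in rows]

# Demo against a throwaway database (a stand-in for a real catalog copy):
path = os.path.join(tempfile.mkdtemp(), "demo.db")
with sqlite3.connect(path) as conn:
    conn.execute("CREATE TABLE demo_table (id INTEGER PRIMARY KEY)")
print(list_tables(path))  # -> ['demo_table']
```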

New Here,
Apr 10, 2013

My photos were the same DNG photos in the same location for both tests. PSE11 saw them in a watched folder and loaded them into its catalog automatically when I started it earlier. So the test used the same source; however, PSE11 and LR don't share ACR, so that wasn't part of the test. I believe the test showed that LR is doing something other than converting the DNGs to JPGs, and I "speculate" that SQLite plays a part in the CPU usage spike and slow timing. If nothing else, I now know how to efficiently convert my photos to JPGs.

I, too, had no issues with any of the PSEs and I started with PSE4. I switched to LR 4 when I got my D600 and PSE10 wouldn't handle the NEF files. I then got PSE11 and Elements Premium for video.

Community Expert,
Apr 10, 2013

BillAnderson1 wrote:

Ok, here’s an example of blazing speed: I just ran this test less than 5 minutes ago. I fired up LR in Library mode; I selected 14 photos that I took this morning, all DNG averaging 30MB; I exported them to a folder on a separate disk. I exported them as JPG at 1620X960 pixel dimensions with no sharpening. It took 1 minute 38 seconds. I shut down LR and deleted the JPGs from the folder. I then fired up Photoshop Elements 11; I had earlier imported the same 14 photos to PSE 11's catalogue, so I selected them for export. I set the export size to the same 1620X960 pixel dimensions. The PSE 11 export took 9.4 seconds. The JPGs were pretty much the same in quality and size.

Bill

Further tests on a series of 24 raw .CR2 files (about 8 MB each). Exporting from raw to JPEG, full size, high quality: about 50 seconds in both LR and PSE. At 1620x960: no change in LR, 14 seconds in PSE. At 640x480, medium quality: 9 seconds in PSE... Default conversion, without heavy edits, in LR/ACR.
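For anyone wanting to reproduce numbers like these, a tiny timing harness keeps the comparisons honest (plain Python; process_one is whatever export step you want to measure, with squaring below only as a placeholder):

```python
import time

def time_batch(process_one, items):
    """Run process_one over every item and report (total seconds, seconds per item)."""
    start = time.perf_counter()
    for item in items:
        process_one(item)
    elapsed = time.perf_counter() - start
    return elapsed, elapsed / max(len(items), 1)

total, per_item = time_batch(lambda x: x * x, range(1000))
print(f"{total:.4f}s total, {per_item:.6f}s per item")
```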

Basic computer: 8 GB RAM, Intel i5 (PSE is a 32-bit application).

New Here,
Apr 10, 2013

Excellent sleuthing, Michel. What do you make of the results?

Community Expert,
Apr 10, 2013

BillAnderson1 wrote:

Excellent sleuthing Michel. What do you make of the results?

I don't know... It's not the thumbnail cache (two sizes: 320x240 and 160x120). There are no large previews like in LR. Previewing those raw files in 'full screen mode' (a sort of instant slideshow) is nearly immediate when you change to the next picture, and under 2 seconds when you jump to a random spot in the filmstrip. Getting the 100% view by clicking on the image takes less than 3 seconds, and going back to full screen is immediate. According to an alleged answer by an Adobe engineer to a customer complaining about the image quality before and after the 100% view, the downsizing before is bicubic, while after it's bicubic sharper... Don't forget my raws are only 8 MB, from my Canon 20D. The catalog has about 50,000 items in both PSE and LR. Nothing scientific there.

I can confirm that the Organizer uses 4 SQLite databases: the catalog, the thumbnail and faces caches, plus waldo.cache. Adobe replaced the Access database with the SQLite one in version 6.

Edit:

Could it be the use of a JPEG preview embedded in the .CR2 raw file?
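That's easy to check crudely: most raw formats do carry an embedded JPEG preview, and a byte scan for the JPEG start/end markers will find one. The sketch below is deliberately naive (real tools such as ExifTool walk the TIFF/IFD structure rather than scanning bytes, and the synthetic blob stands in for a real .CR2):

```python
def extract_embedded_jpeg(data):
    """Return the first embedded JPEG (SOI..EOI bytes) found in a raw blob, or None."""
    start = data.find(b"\xff\xd8\xff")   # JPEG start-of-image marker
    if start == -1:
        return None
    end = data.find(b"\xff\xd9", start)  # JPEG end-of-image marker
    if end == -1:
        return None
    return data[start:end + 2]

# Synthetic stand-in for a raw file with a preview buried inside:
blob = b"\x00" * 16 + b"\xff\xd8\xff\xe0" + b"P" * 32 + b"\xff\xd9" + b"\x00" * 16
preview = extract_embedded_jpeg(blob)
print(preview is not None, len(preview))  # -> True 38
```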
