Participant
July 14, 2025
Question

Apple Silicon chip impact on Lr Classic 14.4 Adaptive Color and AI Denoise performance


I'm currently running Lightroom Classic on an M1 Mac mini with 16 GB of RAM, and I find that the operations that make me wait for the computer are Lightroom Classic 14.4 computing AI Denoise and the Adaptive Color profile.

 

Are there public benchmarks that show how various Apple Silicon chips perform on these operations (e.g. with M1 as 1.0 and something twice the speed as 2.0, etc.)?

 

I'm particularly interested in what kind of performance factor M4 buys compared to M1 and what M4 Pro buys compared to M4.

 

Are these run on the GPU or on the Neural Engine? That is, with the two variants of M4 Pro Mac minis with different numbers of GPU cores but the same number of Neural Engine cores, does the number of GPU cores make a difference?

1 reply

Conrad_C
Community Expert
July 14, 2025

Lots of good questions. Although the answers are in other threads here, they're spread out and not easy to find, so here's a summary.

 

Denoise

 

Yes, for Denoise at least, on a few non-Adobe photography forums the members have run performance tests on many different Macs and PCs using the same raw file. (Here is an example from 2 years ago.) The general findings are that Denoise performance is tied tightly to GPU performance and the number of megapixels in the image. The fastest Mac and PC GPUs can denoise a 60MP image in around 10–15 seconds. On Apple Silicon Macs, Denoise performance scales with the number of GPU cores, so you can estimate that a Mac with 32 GPU cores is going to run Denoise roughly twice as fast as a Mac with 16 GPU cores. You can also roughly estimate that a 30MP image would be denoised twice as quickly as a 60MP image.
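The scaling described above can be put into a rough back-of-envelope model: time grows linearly with megapixels and shrinks linearly with GPU core count. This is a sketch under those stated assumptions, and the baseline numbers (a 60 MP image on a hypothetical 16-core GPU taking 40 seconds) are illustrative calibration points, not measured figures from any specific Mac.

```python
# Hypothetical estimator for Denoise time, assuming linear scaling in
# megapixels and inverse-linear scaling in GPU cores, as the forum
# tests suggest. Calibrate baseline_* to a machine you have measured.

def estimate_denoise_seconds(megapixels, gpu_cores,
                             baseline_seconds=40.0,
                             baseline_megapixels=60,
                             baseline_cores=16):
    """Scale the baseline time by image size and inversely by GPU cores."""
    return (baseline_seconds
            * (megapixels / baseline_megapixels)
            * (baseline_cores / gpu_cores))

# Doubling GPU cores roughly halves the time; halving megapixels does too.
print(estimate_denoise_seconds(60, 16))  # 40.0 (the baseline itself)
print(estimate_denoise_seconds(60, 32))  # 20.0
print(estimate_denoise_seconds(30, 16))  # 20.0
```

Real results won't track this perfectly (GPU generation and memory bandwidth matter too), but it gives a first-order way to compare configurations before buying.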

 

For Denoise, no other component makes as much difference as the GPU. A faster CPU, more CPU cores, more Unified Memory, and more or faster storage don't affect Denoise speed very much.

 

Apple Neural Engine

 

Adobe did use the Apple Neural Engine (ANE), Apple's NPU, for Denoise for a while, and it cut the time even further. But the GPU remained the major factor, and some users figured this out by comparing results from low-end to high-end Apple Silicon processors. For Denoise, Macs with the most GPU cores (the Max and Ultra tiers) benefited from the ANE the least; those with the fewest GPU cores got the biggest boost. Unfortunately, reports started coming in of unwanted artifacts being produced when the ANE was used for Denoise, so a while back Adobe stopped using the ANE for Denoise because of those bugs. We don't know the current status, but those ANE bugs affected AI-based features in non-Adobe apps too.

 

I don’t know of any other Lightroom features or even any other Adobe Mac apps that use the Apple Neural Engine. If something else does use it, hopefully Adobe will respond to this thread. The similar NPU on Windows PCs is also not really used by Adobe apps at this time, as far as I know, except for one feature.

 

Adaptive profiles

 

I don’t know if anyone’s specifically tested Adaptive profile performance across Macs. But we can make some educated guesses based on the fact that Adaptive profiles are based on machine learning/AI. That class of features, which includes Denoise, depends heavily on the GPU for performance, so again, it’s reasonable to guess that a Mac with a lot of GPU cores will apply Adaptive profiles noticeably faster than, for example, a MacBook Air with its mere 8-core GPU. In other words, when thinking about performance, Adaptive profiles can probably be lumped in with other machine learning/AI features such as AI masks and reflection removal that also depend on the GPU.

 

Single-core and multi-core CPU performance, the amount of Unified Memory (RAM), and storage performance benefit other features such as preview generation and panorama merging. But the GPU is the most important for many Develop module features, any GPU-accelerated features (such as Export), and especially any features that use machine learning/AI.

 

At least for me, the lesson has been that the GPU is a lot more important today than it used to be. In the past it was OK if I didn't have a top-end GPU for photography; the photo apps ran just fine at the Pro level of Apple Silicon. But because machine learning/AI features are becoming much more important, I might increase the budget for my next Mac to get a Max- or Ultra-level number of GPU cores.