I'm having a Windows PC custom built, with the key app being Lightroom Classic (subscription model). A very (very!) important consideration will be the performance of LR's AI functions, especially Denoise.
I was going to go with an Nvidia RTX 4070 Super, but with the announcement of the 5000 series I see that the 5070 (non-Ti) is about the same price, so I compared them (note that the 5070 Ti is out of my price range).
Both have 12 GB of VRAM. When I look at the specs, the pros and cons of the 5070 are:
Pros: faster clock speed, PCIe 5.0 (vs 4.0), GDDR7 memory (vs GDDR6X), higher memory bandwidth (about 1/3 faster)
Cons: fewer CUDA cores, fewer Tensor cores, fewer ray-tracing cores (doesn't matter for LR though?), fewer TMUs (doesn't matter for LR though?), higher power consumption
My understanding is that CUDA cores and, especially, Tensor cores are very important to LR's AI operations. Would that outweigh the faster memory and higher bandwidth?
BTW, a good comparison of the two can be found here:
https://technical.city/en/video/GeForce-RTX-4070-SUPER-vs-GeForce-RTX-5070
So, CUDA and Tensor cores (4070 Super) over clock speed and memory/PCIe speed (5070 non-Ti)?
Thanks!
Can anybody tell me if I have perhaps posted this to the wrong area? I'm wondering whether I would get responses if it were posted elsewhere in the forums.
Thanks
Anybody?
This is probably the best place to post that question, or at least I can’t think of another sub-community that would be more appropriate.
The problem with the delay in responses is that probably not many users understand Denoise at that level, which means that for an answer you can count on, you'll probably be waiting for someone knowledgeable from Adobe. Actual engineers are known to pop in from time to time and answer questions, but that isn't guaranteed, because technically this is a user forum, not a direct line to engineering.
As an example of the limited detail users might know about this, I’m about to tell you everything I know about Denoise and GPUs.
On PCs, people have the most luck with the latest, greatest Nvidia graphics cards, like the 40x0 series, but you already know that. I think Tensor cores are the most important component for Denoise, but someone else will have to confirm that. I also don't know whether Lightroom Classic can use any of the NPUs available for PCs.
On Macs, what matters most is the number of GPU cores, because Denoise performance scales roughly linearly with them; doubling the GPU cores usually halves the Denoise time. If Adobe can re-enable support for the Apple Neural Engine NPU, Denoise time should drop by an additional amount, as it did before Adobe disabled that support due to issues.
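As a rough illustration of that linear scaling, here's a back-of-the-envelope sketch with made-up numbers (not measured Lightroom benchmarks; real Denoise times depend on many other factors):

```python
# Hypothetical model of the claim above: Denoise time treated as
# inversely proportional to GPU core count. All figures are invented
# purely for illustration.

def estimated_denoise_seconds(baseline_seconds: float,
                              baseline_cores: int,
                              target_cores: int) -> float:
    """Scale a known Denoise time by the ratio of GPU core counts."""
    return baseline_seconds * baseline_cores / target_cores

# Suppose a hypothetical 10-core GPU takes 60 s per image:
print(estimated_denoise_seconds(60.0, 10, 20))  # 30.0 -> double the cores, half the time
print(estimated_denoise_seconds(60.0, 10, 40))  # 15.0 -> 4x the cores, a quarter of the time
```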
On both platforms, GPU memory is not really an enabler, just a potential bottleneck if there isn't enough. In other words, once you have enough graphics memory to run Denoise properly, adding more won't speed it up much further. How much is enough? I'm not sure, but I'll guess 6–8 GB, with diminishing returns after that.
Thanks for the reply. I realize it's a tough question, but it's an important one for me, so I was hoping for a miracle, I guess: a non-Adobe person knowing the answer. I did try chat, but the Adobe person just kept giving me info that was either the system minimums or info that didn't apply at all.
One would think Adobe would give more info on this, given that they are moving more and more into the AI realm.
Anyhow, thanks again.