What GPU features are most important for LR Denoise?
I'm having a Windows PC custom built, with Lightroom Classic (subscription model) as the key application. A very (very!) important consideration is the performance of LR's AI functions, especially Denoise.
I was going to go with an NVIDIA RTX 4070 Super, but with the announcement of the 5000 series I see that the 5070 (non-Ti) is about the same price, so I compared them (note that the 5070 Ti is out of my price range).
Both have 12 GB of VRAM. Looking at the specs, the pros and cons of the 5070 are:
Pros: faster clock speed, PCIe 5.0 (vs. 4.0), GDDR7 memory (vs. GDDR6X), roughly one-third higher memory bandwidth
Cons: fewer CUDA cores, fewer Tensor cores, fewer ray-tracing cores (though those shouldn't matter for LR?), fewer TMUs (likewise?), higher power consumption
My understanding is that CUDA cores and, especially, Tensor cores matter most for LR's AI operations. Would the 4070 Super's core advantage outweigh the 5070's faster memory and higher bandwidth?
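To sanity-check the trade-off, here's a quick back-of-envelope sketch in Python. The core counts, boost clocks, and bandwidth figures are my assumptions pulled from public spec sheets (double-check them against the comparison link below), and the FLOPS formula is only a naive first-order estimate that ignores tensor-core generation differences, drivers, and how Adobe's Denoise model actually maps onto the hardware:

```python
# Back-of-envelope FP32 throughput: cores x boost clock x 2 (one FMA = 2 FLOPs).
# Spec numbers are assumptions from public spec sheets -- verify before relying
# on them. This says nothing about 4th-gen vs. 5th-gen Tensor core efficiency.

cards = {
    #                  (CUDA cores, boost clock GHz, memory bandwidth GB/s)
    "RTX 4070 Super": (7168, 2.475, 504),
    "RTX 5070":       (6144, 2.512, 672),
}

for name, (cores, clock_ghz, bandwidth) in cards.items():
    tflops = cores * clock_ghz * 2 / 1000  # 2 FLOPs per core per cycle (FMA)
    print(f"{name}: ~{tflops:.1f} TFLOPS FP32, {bandwidth} GB/s bandwidth")
```

By this naive measure the 4070 Super has roughly 15% more raw shader throughput, while the 5070 has about 33% more memory bandwidth. Since Denoise appears to lean heavily on the Tensor cores, real-world LR benchmarks would settle this better than spec math.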
BTW, a good comparison of the two can be found here:
https://technical.city/en/video/GeForce-RTX-4070-SUPER-vs-GeForce-RTX-5070
So, should I favor CUDA and Tensor cores (4070 Super) over clock speed and memory bandwidth (5070 non-Ti)?
Thanks!
