Specs:
Windows 10 Pro
Ryzen 3900x
GTX 1080 Ti
32 GB RAM
I'm currently seeing under 20% CPU usage, under 50% GPU usage, and under 50% RAM usage while encoding via CUDA Mercury Playback.
Any suggestions to help Premiere Pro/Media Encoder utilize my PC's full power?
Thanks,
Make sure the renderer in both Premiere Pro and Media Encoder is set to GPU acceleration, and consider deleting the media cache in both apps if you are experiencing slow renders. Also consider updating your graphics driver, using Nvidia's Studio drivers rather than the Game Ready (gaming) ones.
As for using your full system power: Premiere Pro and Media Encoder only use GPU acceleration where it applies, i.e. where you have used GPU-accelerated effects like Lumetri and certain other effects, blending modes, adjustment layers, etc., and they will still use the CPU as much as needed in most cases. If you set the renderer to Software Only, you force everything onto the CPU. Utilization also depends on the format and container you are exporting to, such as H.264 or Cineform.
You can check the Memory settings under Preferences, but I don't think it's a good idea to increase the RAM reserved for Premiere Pro at the expense of other apps. Under Preferences > Playback you can also uncheck Enable Mercury Transmit if you don't use it, and you can tell Media Encoder whether or not to pause tasks during Premiere Pro playback. Test these settings and see what works for you.
A lot of people are having problems with AMD CPUs paired with Nvidia GPUs; some have had better luck disabling NVENC.
Quick Sync and NVENC do not offer unlimited acceleration. I suspect the 12- and 16-core AMD CPUs can compute faster than the dedicated NVENC hardware, if the AMD CPUs are as good as AMD users claim. That might be why we are seeing so many issues on AMD CPU + Nvidia GPU systems. NVENC and Quick Sync will make a quad-core CPU scream, but they might actually slow down a 16-core CPU. That is just a guess. A motherboard BIOS update might also help.
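Premiere Pro doesn't expose this from the command line, but if you want to sanity-check the NVENC-vs-CPU question on your own machine, a rough way is to encode the same clip both ways with ffmpeg and compare wall-clock times. This is just a sketch, assuming ffmpeg is installed with NVENC support and `input.mp4` is a hypothetical test clip on your disk:

```shell
# Software H.264 encode: libx264 runs on the CPU and will load all cores
time ffmpeg -y -i input.mp4 -c:v libx264 -preset medium out_cpu.mp4

# Hardware H.264 encode: h264_nvenc runs on the Nvidia card's NVENC block
time ffmpeg -y -i input.mp4 -c:v h264_nvenc out_gpu.mp4
```

While each command runs, watch Task Manager's Performance tab: the first should peg the CPU, the second should light up the GPU's "Video Encode" engine. On a high-core-count CPU it's entirely possible for the libx264 run to finish first, which is the effect being speculated about above.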