Re: Speed of GPU THAT Important?

Contributor, Dec 25, 2018
I just built a new "general purpose" PC to replace my backup NLE and general purpose machine.

The new machine has a GTX 1060 in it and 1/4 as much RAM (32GB) as my main NLE, which is a dual Xeon E5-2667 with 128GB and a Titan X.

In three years, I have never gotten Premiere to play back smoothly on the dual Xeon. I've been back and forth with Supermicro tech support, tweaking the X10DRi motherboard to eke out the last ounce of performance.

Fast forward to Dec 2018. I built a midrange system with an i7-9770K, only 32GB RAM, and an MSI GTX 1060 that cost 1/5 of what the Titan X cost me. The lower-end PC wipes the floor with the $15,000 NLE I built in Aug 2015. It plays video effortlessly, with no stutters and no dropped frames.

Now here's where some of my question comes from:

I use NVENC to export HEVC 4K video quite often. On my expensive NLE, it renders at barely realtime speed, perhaps a little slower, and GPU usage is quite high during export.

On my new machine, I tested a 4K HEVC export with similar files and settings, and it renders roughly 3x faster than realtime. GPU use is about 85%.

Now here's what REALLY surprised me: VLC Media Player on the old machine cannot play the HEVC 4K footage. It freezes, updating a distorted frame maybe every few seconds, with CPU usage at 100%.

On the new PC, VLC plays it smoothly. CPU usage is ONE PERCENT!! GPU, about 38% at peak. This is a machine I built for $1500, ONE TENTH the build cost of my main NLE.
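For anyone who wants to take Premiere and VLC out of the equation, a standalone ffmpeg timing run is one way to compare the cards directly. This is a minimal sketch, assuming an ffmpeg build with NVENC/CUVID support; the clip names are placeholders:

```python
# Time a GPU HEVC encode and a GPU HEVC decode outside of any NLE.
# hevc_nvenc is ffmpeg's NVENC encoder; hevc_cuvid is its GPU decoder.
import subprocess, time

def timed(cmd):
    t0 = time.time()
    subprocess.run(cmd, check=True)
    return time.time() - t0

# GPU encode: transcode a source clip to HEVC via NVENC, discard the output
enc = timed(["ffmpeg", "-i", "clip4k.mov",
             "-c:v", "hevc_nvenc", "-b:v", "40M", "-f", "null", "-"])

# GPU decode: decode an HEVC clip on the card, discard the frames
dec = timed(["ffmpeg", "-c:v", "hevc_cuvid", "-i", "clip4k_hevc.mp4",
             "-f", "null", "-"])

print(f"encode: {enc:.1f} s, decode: {dec:.1f} s")
# Dividing the clip duration by each time gives the realtime multiple.
```

Run on both machines with the same clips, the two numbers would show whether the gap is in the encode/decode hardware or somewhere in the software stack.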

So here's the question:

Is the GTX1060 THAT much faster than the Titan X?

Or is the difference because my new PC is running Windows 10 Pro and the NLE is running Windows 7 Pro?

I am about to hit the buy button on a GTX 1080 Ti (even faster than the 1060) to replace the 3-year-old Titan X in the expensive NLE.

To think that I suffered with almost unusable performance, with playback freezing and jumping from one frame to another every few seconds, making any kind of serious editing work impossible, while a cheap PC build runs Premiere this smoothly. Well, I'm bummed out!

Community Expert, Dec 25, 2018
Branched to a new discussion.

Community Expert, Dec 25, 2018
The dual Xeon E5-2667 is not recommended for Premiere Pro:

Re: Workstation that will handle 4K

Contributor, Dec 25, 2018
I chose the dual Xeon because Puget Systems had that as their $30,000 flagship Adobe Premiere NLE workstation.

To think I could have built ten workstations with better performance if I had waited another 3 years...

LEGEND, Dec 25, 2018
Actually, your old Titan X is of the older Maxwell generation, which suffered from memory downclocking issues when running CUDA (GPGPU) applications: the memory speed was capped at 1.5 GHz even if the memory was rated to run much faster. That effectively reduced the available memory bandwidth of that old Titan X to 288 GB/s.

The GTX 1060, on the other hand, is of a newer architecture (Pascal) that makes better use of its available memory bandwidth in GPGPU applications. In your case, the GTX 1060 runs its memory at 2.15 GHz in CUDA mode, for an effective memory bandwidth of 206.4 GB/s. The two cards are much closer to one another than you previously thought.
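The arithmetic behind those figures is a quick sanity check (a minimal sketch; GDDR5 transfers four bits per pin per command-clock cycle, and the 384-bit and 192-bit bus widths are the two cards' published specs):

```python
# Effective GDDR5 bandwidth = command clock * 4 (quad-pumped) * bus width / 8
def gddr5_bandwidth_gbs(command_clock_ghz, bus_width_bits):
    return command_clock_ghz * 4 * bus_width_bits / 8

print(gddr5_bandwidth_gbs(1.5, 384))   # Titan X capped in CUDA mode: 288.0 GB/s
print(gddr5_bandwidth_gbs(2.15, 192))  # GTX 1060 in CUDA mode:       206.4 GB/s
```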

But before you waste more money on a new GPU for that older dual Xeon workstation, think about the CPU architecture of the workstation itself. It is of the older, less efficient Haswell-EP generation, and a dual-CPU setup requires enough inter-socket switching that it suffers from much greater latency than any single-CPU setup.

And this is despite those dual Xeon E5-2667 v3 CPUs having a total of 16 cores and 32 threads at an effective Turbo-boosted clock speed of 3.4 GHz, whereas your new 8-core, 8-thread i7-9700K (there is no i7-9770K at all) runs at 4.6 GHz in all-core Turbo mode. Combined with the fact that Adobe's Creative Cloud programs reach the point of diminishing returns at around 8 to 10 total cores, that makes the old workstation a colossal waste of money for Adobe work these days.
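A crude back-of-the-envelope with only the figures above makes the same point. This sketch assumes scaling stops at about 8 useful cores and ignores IPC and inter-socket latency, both of which favor the 9700K even further:

```python
# "Usable" aggregate clock: cores beyond the scaling wall contribute nothing.
def usable_ghz(cores, all_core_turbo_ghz, useful_core_cap=8):
    return min(cores, useful_core_cap) * all_core_turbo_ghz

print(usable_ghz(16, 3.4))  # dual Xeon E5-2667 v3: 27.2
print(usable_ghz(8, 4.6))   # i7-9700K:             36.8
```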

Contributor, Dec 26, 2018
I'm still running CS6 here, as the main NLE has no internet connection whatsoever, so I'm basing all performance observations on CS6 behavior.

NVENC encoding mainly uses the NVIDIA GPU, so the CPU is mostly uninvolved in the process. It appears that the GTX 1060 can encode HEVC almost 3x faster than the Titan X.

As for playback performance, I also have DaVinci Resolve 15 running on that box, and it plays 4K video effortlessly. In fact, I experimented to see where the breaking point was: I had FIVE 4K streams in one layered timeline, one full frame and the other four overlaid in smaller windows, rotated at random angles on top of the main video, each with a LUT and sharpening filters applied. The CPU ran at 50% utilization, the Titan X in the high 90s, and no frames were dropped. That suggests the dual Xeon is a very capable editing platform. I would switch to Resolve entirely if it supported a few more file formats, but it's limited to cinema formats.

I checked the box the CPU came in... you're right. It's a 9700K. Old age, failing eyes, and failing memory are to blame.

Contributor, Dec 26, 2018
In shopping for new system hardware, I'm seeing the Intel Xeon W-3175X being touted for "content creators" and high-end workstations, much like my E5 series was 3 years ago.

But I am also considering the i9-9900K for its higher clock speed.

It's been a running 3-year mystery why my dual Xeon E5-2667 can't play video at any resolution, in any codec, at frame rates above 24 without dropping excessive frames. Recently I was having trouble playing DV footage I shot years ago on a Sony VX2000; that 60i material was dropping frames. Yet I can play uncompressed 4K footage fairly smoothly at 24 fps.

I've also noticed that I can scrub through the timeline and 'fast play' footage by dragging the CTI around, and it plays out effortlessly. But hit play, and it drops a bunch of frames and stutters badly. Stop and play a few times, and it will play smoothly from then on. It's as if something is wrong with the pipeline.

Contributor, Dec 27, 2018
I'm finding it hard to believe that the Xeons are the bottleneck here.

Playing back some 4K 24p ProRes footage: CPU utilization = 4%, GPU utilization = 71%.

I don't understand why it's dropping so many frames.
