
GPU Encoding Performance Worth It?

Explorer, Apr 18, 2020

My Win10 system's processor (an i5-9400F) does not include an integrated GPU, so when I create a video (H.264) I can only use software encoding, which typically takes about an hour. I am trying to determine whether replacing my current processor with one that includes the integrated GPU (i.e., the i5-9400) would save me any appreciable time in creating videos.

 

TIA

-L

Community Expert, Apr 18, 2020

Short answer: No.

 

Community Expert, Apr 18, 2020

I've moved this to the Video hardware forum.

Community Expert, Apr 18, 2020
Explorer, Apr 18, 2020

Thanks for the link and response, but there seems to be no definitive answer in that article as to which parts of H.264 encoding are accelerated by the GPU (maybe this is a moving target as features in the application are updated) or how much (or little) the GPU actually affects performance.

 

If there is no discernible performance improvement with the GPU, then why have this capability in Premiere at all?

LEGEND, Apr 18, 2020

I can see why there is little real-world performance difference between exports with and without QuickSync at equal export bitrates: the graphics processing performance of Intel's CPU IGPs is, honestly speaking, lousy to begin with.

 

On the other hand, had Adobe not added QuickSync support, then newer versions of Premiere Pro would have required a PC based on an HEDT platform with quad-channel RAM and 16 or more physical CPU cores just to run at all. That would have cut off a very sizable percentage of professionals, and your i5-9400F would not even have been able to launch Premiere Pro, since the system would have fallen well below the minimum CPU performance requirements.

 

A beta release of Premiere Pro (available to Creative Cloud subscribers under the "Beta Apps" section of the Creative Cloud desktop app) will add Nvidia's NVENC and AMD's VCE hardware encoding support in addition to Intel's QuickSync. NVENC, in particular, will significantly speed up H.264 (AVC) and H.265 (HEVC) encodes/exports, but it requires a higher-end GPU for the feature to work well. Forget about low-end GT-series GeForce GPUs, as they do not support NVENC at all; a GTX GPU is required for that.

Explorer, Apr 21, 2020

I have a GTX 1660 Ti, so I'll need to see if it supports NVENC. Is there any guesstimate on when this will move out of beta? Will this even have any noticeable effect on small projects like mine? (2-4 video and audio tracks, about an hour of total video duration)

 

-L

LEGEND, Apr 21, 2020

Yes, that GTX 1660 Ti supports NVENC. In fact, it has the more desirable Turing encoder rather than the older Volta NVENC block that was in the non-SUPER GTX 1650 (the GTX 1650 SUPER also has the Turing encoder, as it is based on the same TU116 GPU as the GTX 1660, as opposed to the TU117 used in the non-SUPER 1650).
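
If you want to sanity-check NVENC availability yourself before the beta feature arrives, one way (entirely outside Premiere) is to attempt a short test encode with ffmpeg's h264_nvenc encoder. This is a minimal sketch, assuming an NVENC-capable ffmpeg build is on your PATH (ffmpeg is a separate tool, not part of Premiere):

import subprocess

# Ask ffmpeg to encode one second of synthetic 1080p video with NVENC and
# discard the output. Exit code 0 means the driver and GPU accepted an
# NVENC session; anything else usually means no NVENC-capable GPU or an
# ffmpeg build compiled without NVENC support.
result = subprocess.run(
    ["ffmpeg", "-y",
     "-f", "lavfi", "-i", "testsrc2=duration=1:size=1920x1080:rate=30",
     "-c:v", "h264_nvenc", "-f", "null", "-"],
    capture_output=True, text=True)

print("NVENC looks available" if result.returncode == 0 else "NVENC not available")
if result.returncode != 0:
    print(result.stderr[-500:])  # tail of ffmpeg's error output, for diagnosis

Note that passing this check only tells you the hardware encoder exists and accepts a session; it says nothing about whether Premiere's beta will actually use it.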

LEGEND, Apr 22, 2020

Also, that GTX 1660 Ti's NVENC should make a noticeably bigger difference (speed-wise) than Intel's QuickSync when encoding H.264 and H.265 exports. And that's because the integrated Intel UHD Graphics' processing performance is, relatively speaking, lousy to begin with.

 

And when the final release that officially debuts NVENC and VCE hardware encoding support arrives (who knows when), you will not be able to use QuickSync and NVENC or VCE simultaneously for encoding. One or the other must be used; if QuickSync and a supported discrete GPU are both present in your PC, NVENC or VCE will automatically take precedence over QuickSync, with no option to fall back to QuickSync (the only fallback will be software encoding).

Explorer, Apr 22, 2020

So if NVENC becomes a reality and is stable, I won't need to worry about QuickSync; I might as well stick with my current processor and see. Thanks!

Community Expert, Apr 24, 2020

I've just run the Puget test using the Premiere 14.2 beta, both with and without QuickSync enabled:

Nvidia Only

Overall 755

Export 82.4

Live Playback 68.6

GPU 59.8

 

Nvidia and QuickSync

Overall 788

Export 85.4

Live Playback 72.1

GPU 59.7

 

As can be seen, there seems to be an advantage to having QuickSync as well as Nvidia, especially for Live Playback.

 

i9-9900K OC to 5GHz all cores

Gigabyte Z390

RTX 2070 Super

64 GB RAM

Windows 10 
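
If you'd rather see those deltas as percentages, here is a quick throwaway calculation (Python, numbers hand-copied from the two runs above, so treat it as a sketch):

# Percentage change from "Nvidia only" to "Nvidia and QuickSync"
pairs = {
    "Overall":       (755, 788),
    "Export":        (82.4, 85.4),
    "Live Playback": (68.6, 72.1),
    "GPU":           (59.8, 59.7),
}
for name, (nv_only, nv_qs) in pairs.items():
    print(f"{name}: {100 * (nv_qs - nv_only) / nv_only:+.1f}%")
# Overall: +4.4%, Export: +3.6%, Live Playback: +5.1%, GPU: -0.2%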

Explorer, Apr 24, 2020

Thanks for the data! Does the 14.2 beta include the NVENC support? I am most interested in export performance, and I ran several tests via the Premiere/Media Encoder queue to see whether hardware acceleration and system memory make any difference in encoding time. Here are my results:

Adobe Media Encoder 2020, Version 14.1, Build 155
Video Card: NVIDIA GeForce GTX 1660Ti
Processor: Intel i5-9400F at 2.90GHz

 

<><> 8GB System Memory, Renderer: Mercury Playback Engine GPU Acceleration
04/20/2020 09:18:18 AM : Queue Started
- Source File: C:\Users\larry\AppData\Local\Temp\20200419_Worship.prproj
- Output File: C:\Users\Public\Videos\20200419_Worship_Test1.mp4
- Preset Used: High Quality 1080p HD
- Video: 1920x1080 (1.0), 29.97 fps, Progressive, Software Encoding, 00:59:35:22
- Audio: AAC, 320 kbps, 48 kHz, Stereo
- Bitrate: VBR, 1 pass, Target 20.00 Mbps, Max 24.00 Mbps
- Encoding Time: 00:46:36
04/20/2020 10:04:55 AM : File Successfully Encoded

 

<><> 8GB System Memory, Renderer: Mercury Playback Engine Software Only
04/20/2020 10:17:16 AM : Queue Started
- Source File: C:\Users\larry\AppData\Local\Temp\20200419_Worship_1.prproj
- Output File: C:\Users\Public\Videos\20200419_Worship_Test2.mp4
- Preset Used: High Quality 1080p HD
- Video: 1920x1080 (1.0), 29.97 fps, Progressive, Software Encoding, 00:59:35:22
- Audio: AAC, 320 kbps, 48 kHz, Stereo
- Bitrate: VBR, 1 pass, Target 20.00 Mbps, Max 24.00 Mbps
- Encoding Time: 00:48:12
04/20/2020 11:05:29 AM : File Successfully Encoded

 

<><> 32GB System Memory, Renderer: Mercury Playback Engine GPU Acceleration
04/20/2020 11:33:19 AM : Queue Started
- Source File: C:\Users\larry\AppData\Local\Temp\20200419_Worship_2.prproj
- Output File: C:\Users\Public\Videos\20200419_Worship_Test3.mp4
- Preset Used: High Quality 1080p HD
- Video: 1920x1080 (1.0), 29.97 fps, Progressive, Software Encoding, 00:59:35:22
- Audio: AAC, 320 kbps, 48 kHz, Stereo
- Bitrate: VBR, 1 pass, Target 20.00 Mbps, Max 24.00 Mbps
- Encoding Time: 00:40:19
04/20/2020 12:13:38 PM : File Successfully Encoded

 

<><> 32GB System Memory, Renderer: Mercury Playback Engine Software Only
04/20/2020 12:30:18 PM : Queue Started
- Source File: C:\Users\larry\AppData\Local\Temp\20200419_Worship_3.prproj
- Output File: C:\Users\Public\Videos\20200419_Worship_Test4.mp4
- Preset Used: High Quality 1080p HD
- Video: 1920x1080 (1.0), 29.97 fps, Progressive, Software Encoding, 00:59:35:22
- Audio: AAC, 320 kbps, 48 kHz, Stereo
- Bitrate: VBR, 1 pass, Target 20.00 Mbps, Max 24.00 Mbps
- Encoding Time: 00:45:01
04/20/2020 01:15:20 PM : File Successfully Encoded

 

<><> 32GB System Memory, Renderer: Mercury Playback Engine GPU Acceleration, Output Preview OFF
04/20/2020 01:19:56 PM : Queue Started
- Source File: C:\Users\larry\AppData\Local\Temp\20200419_Worship_4.prproj
- Output File: C:\Users\Public\Videos\20200419_Worship_Test5.mp4
- Preset Used: High Quality 1080p HD
- Video: 1920x1080 (1.0), 29.97 fps, Progressive, Software Encoding, 00:59:35:22
- Audio: AAC, 320 kbps, 48 kHz, Stereo
- Bitrate: VBR, 1 pass, Target 20.00 Mbps, Max 24.00 Mbps
- Encoding Time: 00:41:06
04/20/2020 02:01:03 PM : File Successfully Encoded
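
For anyone who wants those as percentages, here is a small throwaway script (Python) with the times hand-copied from the logs above; the slowest run (8GB, software only) is used as the baseline:

# Encoding times from the Media Encoder logs above (H:MM:SS)
runs = {
    "8GB, GPU acceleration":               "00:46:36",
    "8GB, software only":                  "00:48:12",
    "32GB, GPU acceleration":              "00:40:19",
    "32GB, software only":                 "00:45:01",
    "32GB, GPU acceleration, preview off": "00:41:06",
}

def to_seconds(hms):
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s

baseline = to_seconds(runs["8GB, software only"])  # slowest run
for name, hms in runs.items():
    saved = 100 * (baseline - to_seconds(hms)) / baseline
    print(f"{name}: {hms} ({saved:+.1f}% vs 8GB software only)")

Read that way, the jump from 8GB to 32GB saved more time (roughly 7-16% depending on the run) than switching the renderer to GPU acceleration did at 8GB (about 3%). Keep in mind the renderer setting accelerates effects rendering during export; the H.264 encode itself was software in every run, as the "Software Encoding" entries in the logs show.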

LEGEND, Apr 24, 2020

Richard, thanks for the added info about your system. With only NVENC enabled, your overall score barely beat mine, which averaged:

 

Overall 725

Export 79.3

Live Playback 65.7

GPU 54

 

And that's in LARGE part due to the fact that my CPU is completely stock (averaging just above 4.0 GHz) while your CPU is overclocked to such a high degree. With both CPUs at "stock" (actually, at their default Turbo speeds), the i9-9900K system would actually underperform the R7 3800X system slightly.

 

And as I noted, the QuickSync feature still decodes the H.264 timeline even when it isn't used for the encode. That boosts both the live playback and export scores a bit (when both GPUs in the Intel setup are enabled).

LEGEND, Apr 27, 2020

I have just finished not only the "Standard" but also the "Extended" PugetBench testing with the 14.2 beta.

 

Here are my results (mine is the first 14.2 beta run to complete and submit the "Extended" results):

Extended Overall 797

Extended Export 97.1

Extended Live Playback 62.2

Standard Overall 713

Standard Export 77.7

Standard Live Playback 64.9

GPU 54

 

My system is made up of the following:

R7 3800X at default Turbo (averaging just above 4.0 GHz on all cores)

Asus PRIME X570-P

RTX 2060 Super

32 GB RAM

Windows 10 Pro

 

This is a fairly tough bar for a modestly priced (by pro video editors' standards) PC to reach. An i5-9400F with a GTX 1660 Ti should score somewhere in the 570s on the Standard test, with an Extended score around 630 (both with NVENC enabled in the 14.2 beta).

 

EDIT: I have just found the first "Standard" result from Intel's new i9-10900K, with QuickSync enabled and a GeForce RTX 2080, running the Premiere Pro 14.2 beta. The initial result was somewhat disappointing, in large part due to the Z490 chipset being in a preproduction state. The Export score falls below even that of my R7 3800X, let alone Richard's i9-9900K.

Explorer, Apr 28, 2020

What would be interesting to me (so I can get a feel for what these scores actually mean) would be the results of your tests running on 14.1 (no NVENC). I am assuming that the ratio of the results (14.1/no NVENC to 14.2/with NVENC) would give me a ballpark performance increase percentage...

LEGEND, Apr 28, 2020

As I stated a couple of times, PugetBench would not run at all on 14.1 no matter what I did. There is a huge compatibility problem between that particular version of Premiere Pro and the external program that Puget Systems uses to automate the mouse movements and clicks; it caused 14.1 to crash at the very start of every benchmark run. I had to downgrade to 14.0.4 to run the tests without NVENC.

 

The best overall scores I achieved with 14.0.4 were 637 Extended and 611 Standard, with software-only encoding. Compare that to the 797 and 713, respectively, with 14.2 and NVENC.
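
That works out to the ballpark percentage increase you asked about; a quick check on the arithmetic, using the scores above:

# Ballpark % gain from NVENC (14.0.4 software-only vs 14.2 beta with NVENC)
for name, before, after in [("Extended", 637, 797), ("Standard", 611, 713)]:
    print(f"{name}: {100 * (after - before) / before:.0f}% higher overall with NVENC")
# Extended: 25% higher overall with NVENC
# Standard: 17% higher overall with NVENC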

 

And most of that increase comes from exporting to H.264. There is little difference in timeline playback or GPU performance, with or without NVENC.

Explorer, Apr 28, 2020

Thanks! Hopefully, 14.2 will hit production soon...

Community Expert, Apr 29, 2020

For anyone interested, I have a spreadsheet of all my results that I update whenever anything changes on my system.

Here is a link:

 

https://shared-assets.adobe.com/link/00fabe23-3830-4d79-6415-2119cb621164

Explorer, Apr 29, 2020

I was not able to view the spreadsheet from your link, and the download did not produce an Excel spreadsheet...

Community Expert, May 01, 2020

Sorry, my fault, I copied a link instead of the file.

This should work:

https://shared-assets.adobe.com/link/52607d75-0fe3-42ad-7dbe-d36571b21f44

LEGEND, May 03, 2020

The beta has since moved up to 14.3. In addition to your Extended and Standard results with that version (whose scores are as expected given the overclocked CPU), I have seen two Standard results with AMD CPUs: a 678 from a Ryzen 5 3600 and a 766 from a Ryzen 7 3700X. But those Ryzen systems running 14.3 needed the top-of-the-line RTX 2080 Ti just to achieve those results. That, IMHO, is exactly the kind of component imbalance I wanted to stay away from, just in the opposite direction: the combination of a cheaper AMD CPU plus a super-expensive Nvidia GPU puts the total system cost well above that of an Intel system with a more expensive CPU and a lesser GPU that delivers essentially the same overall performance. In fact, the R5 3600/RTX 2080 Ti combo is simply a poor value overall, costing more than even an Intel i9-10920X/RTX 2070 SUPER combo that outperforms it.

LEGEND, Apr 30, 2020

I have just seen the two most recent results from your PC on the 14.2 beta with both QuickSync and NVENC enabled. The first one, as predicted, slightly underperformed my stock-speed AMD system with its slightly lesser GPU, but only because your CPU was at stock default clocks (and even then, running an estimated 4.7 GHz on all cores) when tested. Your second set of results was with the CPU overclocked to 5.0 GHz on all cores.

 

What this means is that, due to Intel's less efficient architecture, one would have to overclock a 9th-Generation i9 to over 4.8 GHz on all cores (something I never had success with back when I was running a quad-core Haswell CPU) just to surpass a stock-speed AMD Ryzen 7 3000-series CPU.

LEGEND, May 05, 2020

Going back to that original question:

 

Yes, it would make some difference. However, in your case, it is not big enough to justify the cost and trouble of replacing the CPU. Only an upgrade to a much more powerful CPU, such as an i9-9900K, would be worthwhile; that alone would make a much bigger difference than merely trading an i5-9400F for a non-F i5-9400. Plus, you would actually lose money trading in that 9400F due to depreciation (that is, the resale value of the i5-9400F is much lower than its original purchase price).

 

In other words, while the i9-9900K is one of the most expensive CPUs for that mainstream Intel platform, it is definitely well worth its current street price. The plain i5-9400, on the other hand, despite adding QuickSync and nominally retailing for the exact same price as the i5-9400F, is definitely not worth the upgrade cost: you'd be paying extra out of your own pocket just to add a feature that should have been free, while getting only a sideways move in performance on everything else.

 

And speaking of that i5-9400F, why did you (or whoever built that system) choose it in the first place? I can see why: at the time the system was built, the i5-9400F cost significantly less than the non-F i5-9400 (the i5-9400F sold for its originally intended price while the plain i5-9400 sold for significantly more than its original full price) due to a production yield shortage of good Coffee Lake parts. Rather than discard the faulty production, Intel simply disabled the integrated Intel UHD Graphics on those parts and resold them with the "F" designation at the end of the model number (which is how we now know what the "F" stands for).

Recently I saw a result in the PugetSystems benchmark database indicating that the overall performance score of the i5-9400F is no better than that of a three-year-old multiplier-locked (non-K) Kaby Lake quad-core i7. If I were you, I would have configured that system with an i5-9600K (not the KF) instead of the i5-9400F right off the bat: the cost difference between those two CPUs, at current street prices, is much smaller than the performance difference would imply.

Explorer, May 05, 2020

To be honest, I did not realize that the processor did not include an integrated GPU. I needed an inexpensive system for use at work and found this HP gaming system on Amazon for $700 that had better specs than my system at the time and also included the Nvidia GTX video card (a $300 card...).

LEGEND, May 05, 2020

The trouble is, that HP system is not much of an improvement, if any, over your previous system. That's because none of HP's gaming desktops have video outputs on the motherboard at all, and there is no provision whatsoever to enable the integrated Intel graphics even if the CPU has it. What's more, HP's more home-office-oriented desktops have motherboard video output ports that get semi-permanently disabled whenever a discrete GPU is installed, again with no provision to force-enable the iGPU. In other words, HP designed its gaming PCs, at both the hardware and firmware level, such that it might as well have used only F-series CPUs in them.

 

And what HP did to all of its gaming desktops is EXACTLY what Gigabyte did to all of its X-series motherboards in the Z68 chipset days of Sandy Bridge: none of Gigabyte's X-series Z68 boards had any video outputs on the rear panel, so the IGP (and therefore QuickSync) was semi-permanently disabled on any CPU installed in them, effectively making them nothing more than glorified P67-based motherboards. What Gigabyte failed to foresee, however, was that gaming systems with higher-end motherboards often made better video editing systems than cheap systems with budget motherboards did, and that the IGP could further accelerate certain video editing functions; instead, Gigabyte at the time relegated IGP/QuickSync/video-out capability to its lower-end Z68 motherboards.

Gigabyte later partially rectified that with the introduction of its XP-series line of Z68 boards, whose BIOSes included a provision to force-enable the IGP whenever needed, but one needed an HDMI-equipped monitor just to use the CPU's IGP for display. At the time, most consumer monitors in use had only VGA or DVI-D inputs and thus could not connect to the XP-series boards at all. There is, however, one thing Gigabyte did get right with those XP-series Z68 boards: it correctly foresaw the time when HDMI and/or DisplayPort would become the new standard for PC monitor connections. Accordingly, none of today's higher-end discrete graphics cards come with VGA or DVI ports at all, and using such a legacy monitor now requires an active whatever-to-DisplayPort converter just to use the monitor at all.

 

Another misstep Gigabyte made during the LGA 1155 days was sticking with a legacy Phoenix/Award BIOS for a couple of years after most of its major motherboard competitors had adopted UEFI BIOSes from AMI (American Megatrends), customized to each manufacturer's specifications. Gigabyte did implement a proprietary Touch BIOS, but that software only worked in Windows Vista or Windows 7, and only with a legacy Award BIOS. Some of those Award-equipped Gigabyte motherboards later received beta UEFI AMI BIOSes, but Gigabyte offered no official way to revert to Award if something went wrong during the update or if the updated UEFI BIOS failed to work well. It wasn't until the Z77 era that Gigabyte finally adopted UEFI as standard.

 

And I cannot find the exact HP gaming system that you mentioned on Amazon. I checked all of the options priced between $600 and $800, only to find that every single one of those systems uses older-generation CPUs (8th-Generation or even 7th-Generation Intel Core) and older-generation GPUs (Pascal or even Maxwell). In fact, all of HP's gaming systems on Amazon with exactly the same specs as your current system cost well over $900.

 

In other words, all HP gaming systems currently available around the $700 price point are a bit outdated. While the GeForce 10-series and earlier GPUs do beat their AMD Radeon contemporaries at 1080p video processing, they choke badly at 4K, to the point that same-age Radeons equal or beat those older GeForces at that resolution. And that is where the driver priorities of Nvidia and AMD diverge: whereas Nvidia optimizes newer drivers for current-generation GPUs but more or less neglects older ones, AMD keeps tweaking its newer drivers for older GPUs but leaves too many bugs unresolved when those drivers are used with newer parts.

 

In other words, a current-generation 6-core HP gaming system that performs no better overall than a four-year-old quad-core Skylake i7 CPU-based PC is a poor buy at that well-over-$900 price tag.
