
Haswell E benchmarks

Guru, Sep 02, 2014

Hello all. It is once again time for some benchmarks with the new release. I will also include the previous X79 4930K clocked at 4.4GHz and the dual 10-core Xeon as references. I will add them as I get them done. Please let me know if you have any questions.

5960X @ 3.5GHz

64GB DDR4 2400

1TB Crucial M550 SSD

780Ti

AVCHD 4 Layer 30 Minute test

Adobe CC2014

3 Layer - 14:35 (Match Source selected)

3 Layer - 28:57 (HDTV 1080P 23.976)

4 Layer - 16:01 (Match Source selected)

4 Layer - 31:37 (HDTV 1080P 23.976)

Red 4K to DPX 4096 x 2048 24p Full Range (max bit depth) 30 seconds of media

3 Layer - 2:08

4 layer - 2:08

Red 5K 98 Frame to DPX 5K 23.976 slow motion Frame Full Range (max bit depth) 30 seconds of media

1 Layer - 2:12

Red 6K to DPX 6K (max bit depth) 20 seconds of media

1 Layer - 1:31

Red 4K to H264 4K 30 seconds of media

4 layer - :50 (Match Sequence H264)

DNG 2.4K to H264 2.4K 26 seconds of media

1 Layer - :15

AE CC 2014

Red 4K to AVI Lossless 4k 30 seconds of media

1 Layer: 2:19

 

5960X @ 4.5GHz

64GB DDR4 2400

1TB Crucial M550 SSD

780Ti

AVCHD 4 Layer 30 Minute test

Adobe CC2014

3 Layer - 11:36 (Match Source selected)

3 Layer - 22:54 (HDTV 1080P 23.976)

4 Layer - 12:48 (Match Source selected)

4 Layer - 24:58 (HDTV 1080P 23.976)

Red 4K to DPX 4096 x 2048 24p Full Range (max bit depth) 30 seconds of media

3 Layer - 1:54

4 layer - 1:58

Red 5K 98 Frame to DPX 5K 24 Frame slow motion Frame Full Range (max bit depth) 30 seconds of media

1 Layer - 1:58

Red 5K 98 Frame to DNxHD 1080 23.978 36 OP1A Frame 30 seconds of media

1 Layer - :12

Red 5K 98 Frame to DNxHD 440X 1080P 60 frame OP1A Frame 30 seconds of media

1 Layer - :14

Red 6K to DPX 6K (max bit depth) 20 seconds of media

1 Layer - 1:21

Red 4K to H264 4K 30 seconds of media

4 layer - :49 (Match Sequence H264)

DNG 2.4K to H264 2.4K 26 seconds of media

1 Layer - :13

AE CC 2014

Red 4K to AVI Lossless 4k 30 seconds of media

1 Layer: 1:59

The playback and export performance with CC 2014 is now relatively consistent. CPU threading was spread across all 16 threads on both playback and export. The GPU load consistently pushed up to 90-98% when the benchmark tests included GPU-accelerated plugins and scaling of multiple layers. The overall efficiency is far better, which is why I didn't put notes after each test. The 8-core clocked at both 3.5GHz and 4.5GHz played back 4K, 5K 98-frame (both 24 and 60 frame playback), and even 6K at full resolution without dropping frames. The 5K playback was smooth regardless of slow motion or full motion preview setup. The increased bandwidth and speed of the RAM is definitely having an impact there. RAM usage was as high as 30GB in Premiere during the testing, but AE went well over 46GB on export. GPU RAM usage pushed 2.5GB on the 3GB card with 4K+ media in Premiere, but normally used around 1GB for 1080. I also included some requested DNxHD OP1A exports from 5K media as a comparison of media length to encoding time for offline work. I will be testing the 6-core 5930K after I do some testing with the RAM at stock 2133.
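For anyone who wants to quantify the stock-versus-overclock gap, here's a quick sketch (my own, not part of the benchmark suite) that parses a few of the mm:ss export times reported above and prints the percentage improvement at 4.5GHz:

```python
# Compare a few of the export times reported above for the 5960X
# at 3.5GHz versus 4.5GHz (times copied from this thread).
def to_seconds(t: str) -> int:
    """Parse 'mm:ss' (or ':ss') into total seconds."""
    m, s = t.split(":")
    return int(m or 0) * 60 + int(s)

# (test name, 3.5GHz time, 4.5GHz time)
runs = [
    ("AVCHD 3 Layer, Match Source", "14:35", "11:36"),
    ("AVCHD 4 Layer, Match Source", "16:01", "12:48"),
    ("Red 6K to DPX 6K",            "1:31",  "1:21"),
    ("AE Red 4K to AVI Lossless",   "2:19",  "1:59"),
]

for name, stock, oc in runs:
    a, b = to_seconds(stock), to_seconds(oc)
    print(f"{name}: {100 * (a - b) / a:.1f}% faster at 4.5GHz")
```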

Eric

ADK

Views: 13.4K
Guru, Oct 13, 2014

Yes, the CUDA cores and RAM bandwidth will make a difference, and the Titan Z would perform better. If it's cheaper than two 980GTX cards, then that is definitely a deal. If it's the normal price, then I wouldn't get it.

Eric

ADK

Explorer, Oct 17, 2014

On the subject of the X99-E WS: the chipset supports a memory density of 512GB across the 8 slots. Currently only 8 x 8GB is enabled, for a total of 64GB. Can upgrades to the current board -- say to 16, 32, or even 64GB per stick -- be made available by firmware updates, or is some hardware change required?

Guru, Oct 20, 2014

Yes, provided the RAM controller in the CPU supports the greater density as well. That is also part of the equation now, since Intel moved the memory controllers onto the CPUs.


Eric

ADK

Explorer, Oct 20, 2014

Thanks Eric

Explorer, Oct 25, 2014

How useful is the "switching" feature of the X99-E WS -- is it worth the $100 premium over the Deluxe? Is it even a good thing at all? Bill Gehrke writes: "the ASUS X99-E WS with more lanes than normal has more latency than a dual Xeon because those lanes are created by adding third-party PCI-e chips. A small price to pay for a much reduced cost..." -- referring to a dual Xeon system; wouldn't we all like one of those. Eric Bowen, I have the impression, is a little less enthusiastic.

From over there in Britain, system builder Adam Simmons writes (in a BlackMagic forum):  "I disagree about the WS board being the best for video editing. The board is being overloaded. All those PCI-e slots only have a finite amount of buslanes to push through and to manage that they use switchers, which are mostly fine for GPU's, but when trying to use the bandwidth for capturing can cause it to drop out. You're better off saving the money and buying a cheaper board that doesn't try to overload the bus lanes (as that seems to be the only real extra on the WS board compared to much cheaper boards)..."

As I understand it, these two PLX chips (is that right?) offer real-time flexibility by reassigning lanes to the busiest devices (when the installed PCIe total exceeds 40 lanes), as opposed to those allocations being fixed when the cards are installed.

I can imagine, say, two 16x GPUs and a Decklink 4K Extreme (8x), which fills the bucket, plus maybe a hardware RAID controller (8x), and/or later an NVMe controller, and who knows what else. I realize there are limits, though I keep imagining jamming everything within reach into the PCIe region, with a mallet if need be...
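For what it's worth, a back-of-the-envelope lane budget for the build I'm imagining (my own toy calculation: the 40-lane figure is the 5960X's native CPU lane count, and the per-device widths are just the examples above, not a validated configuration):

```python
# Toy PCIe lane-budget check for the hypothetical build above.
CPU_LANES = 40  # native PCIe 3.0 lanes on the Haswell-E 5960X

devices = {
    "GPU #1": 16,
    "GPU #2": 16,
    "Decklink 4K Extreme": 8,
    "RAID controller": 8,
}

requested = sum(devices.values())
print(f"Requested: {requested} lanes of {CPU_LANES} native")
if requested > CPU_LANES:
    # This shortfall is what the PLX bridge chips on the X99-E WS
    # paper over, by multiplexing slots onto shared upstream links.
    print(f"Over budget by {requested - CPU_LANES} lanes")
```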

So the question: will E-WS help?  Can its evils be contained?

Explorer, Oct 27, 2014

Anyone?

Guru, Oct 27, 2014

Every single-CPU board will have a more limited total throughput than a dual-socket board to push all of the PCI-E traffic through. However, running the 8X cards you mentioned should be fine on either the WS or the Deluxe. The Deluxe board so far is more stable for clocking the RAM higher. Keep in mind the board dynamically assigns bandwidth away from onboard devices as well when using more bandwidth for slots. If those onboard devices are not utilized to their full extent, then you won't notice a performance drop at the PCI-E slots.


Eric

ADK

Explorer, Oct 27, 2014

Thanks Eric. Is it possible to manually prioritize one or more PCIe cards over others, to protect against, say, dropouts during capture when there's potential for overload? How important are those switchers? Does the E-WS bring anything unique and valuable?

Guru, Oct 27, 2014

The BIOS would have to allow manual channel options to set the PCI-E lanes. So far ASUS normally only allows that for slots limited to 8X electrical, if it's an option at all. What switchers are you talking about? The WS just has more PCI-E slot channels on the board versus the normal 8, besides the channels off the CPU. Beyond that, there is nothing special to the WS board that I have seen.

Eric

ADK

Explorer, Oct 27, 2014

"What switchers" -- that pair of PLX PEX xxxx chips, or something related, supposedly on board.

The workstation has seven slots, as opposed to six on the Deluxe, and the ability to assign lanes according to the occasion, if I understand this right -- and that's all I know, or imagine I know. Is this a real-world plus? Will this potential for overload potentially mess up captures, as system builder Simmons claims?

As for the extra slot, I may need it for an Areca in addition to a Decklink and two double-wide GPUs down the road. If the RAID controller isn't needed, there will be other applicants for its spot.

Guru, Oct 27, 2014

The PLX chips are the PCI-E bridge chips that give extra PCI-E lanes on the board. I don't know if I would call them switchers.

The electrical assignment per slot in the WS is automatic. I don't remember any settings in the BIOS that allow manual control, other than PCI-E Gen 3, 2, or 1.

If you plan on dual GPUs with a RAID and a Blackmagic card, then yes, you would need the WS board.

Eric

ADK

Explorer, Oct 27, 2014

Thanks Eric.

Explorer, Nov 07, 2014

Our Haswell-E 5930K is NOT being fully used when exporting from the latest version of Premiere. Does anyone know why? Any tips?
Our older Haswell 4770K runs its cores at 100% and finishes faster... why?

(attached screenshot: 2014-11-07_1731.png)

Engaged, Nov 10, 2014

Bill Gehrke wrote:

Register and run our Premiere Pro BenchMark at PPBM7.com

Bill, that link is the CS6 benchmark. Where is the latest stuff?

And, not to sound unappreciative, but is Eric the only person benchmarking anything around here now? Wasn't there a whole crew of you at one point? What happened to the PPBM7 and 8 sites and the Tweakers page that I paid a subscription for? Why is there no information on it? Where's Harm?

LEGEND, Nov 10, 2014

That site, http://ppbm7.com, has one download that can be used to run benchmarks on CS6, CC v7, and CC v8. Since you are registered, you can see the results page, and as of the last few days you can not only see the versions but even sort on them. (On the first click the version column looks blank, but click again and you will see the latest version results.)

Sorry, but I cannot readily update the out-of-date info on the site.

New Here, Nov 22, 2014

Thanks for this great thread!

I'd like to build a dedicated workstation for R3D 4K/5K/6K work

(Premiere, After Effects, Fusion, Resolve...) containing:

- mobo / ASUS Rampage V (for stable overclocking)

- CPU / Intel i7-5960X (at 4.5GHz if possible!)

- cooling / Corsair H110 /or Nzxt Kraken x61 /or Noctua NH-D14 / or Noctua NH-D15

- power / Corsair AX1200i

- PCI 16x slot / Gigabyte GTX Titan Black Windforce (factory OC, with 3x fan cooling)

- PCI 8x slot / BMD Decklink Mini Monitor /or Decklink SDI 4K (for grading)

- PCI 8x slot / ThunderboltEX II (for Thunderbolt backup drives)

- PCI 8X slot / RAID card (for 3x dual Barracuda RAID0)

- OS drive (Windows 7) / M.2 drive (don't know yet which one to get)

- project drive / Samsung SSD 840 Pro 256GB

- scratch drive / Samsung SSD 840 Pro 512GB

- media drive / 6x Seagate Barracuda 7200 3TB

- case / Corsair Vengeance C70

Now, with all that information, my question is about the RAM...

To get the best performance with Haswell-E, do we absolutely need a single 64GB kit, or can we take two 32GB kits?

I read somewhere that we can't be absolutely sure that two "identical" kits are really... identical!

And for better CPU overclocking, it's sometimes good to also overclock the RAM slightly, right?

So, should I get the 64GB kit of Corsair Vengeance 2133 / 13-15-15-28 (1100€),

or two 32GB kits of G.Skill Ripjaws4 2400 / 15-15-15-35 (1000€)?

And, last question: for video editing, is there a significant difference between 2400 and 2666?

(Because, for 1200€, I saw two 32GB kits of G.Skill Ripjaws4 2666 / 15-15-15-35...)

Thanks!

Guru, Nov 24, 2014

Well, essentially all the standard RAM manufacturers bid on chips in the East based on market availability and pricing. Grade level is a major factor in both. Oftentimes the DRAM manufacturers change chip suppliers from one production run to the next because of that fluctuation. So when you buy two kits, they may or may not be the same chips or production run. They may also be different grade levels even if they are from the same manufacturer, depending on availability at the time and the speed they are built for. I would not, however, suggest any sticks that are not CL15. I would also suggest getting the G.Skill over the Corsair, but I won't get into why in a public forum. Let's just say you want to avoid mixed timings like those 13-15-15. 2666+ RAM will be the higher-grade RAM. That is what I suggest, and the G.Skill has the same latency regardless of 2400 versus 2666. The performance is worth the price difference.


Eric
ADK

New Here, Nov 25, 2014

Thanks Eric!

So, based on your recommendations (no multiple kits / no mixed timing), I only found those 64GB kits :

GEIL / EVO Potenza / 2133MHz / 15-15-15-36 / 1.2V

GEIL / EVO Potenza / 2400MHz / 15-15-15-35 / 1.2V

GEIL / EVO Potenza / 2666MHz / 15-15-15-36 / 1.2V

GEIL / EVO Potenza / 2800MHz / 16-16-16-36 / 1.2V

GEIL / EVO Potenza / 3000MHz / 15-15-15-35 / 1.35V

GEIL / EVO Potenza / 3000MHz / 16-16-16-36 / 1.35V

GEIL / EVO Potenza / 3200MHz / 16-16-16-36 / 1.35V

But I don't know if GEIL is a quality brand (in my case it's not on the QVL for the RAMPAGE V),

and for the other 64GB kits there is only mixed timing, even for G.Skill:

CORSAIR / Dominator Platinum / 2400MHz / 14-16-16-31 / 1.2V

CORSAIR / Dominator Platinum (with fan) / 2666MHz / 15-17-17-35 / 1.2V

CORSAIR / Vengeance LPX / 2400MHz / 14-16-16-31 / 1.2V

G.SKILL / Ripjaws 4 / 2800MHz / 15-16-16-35-2N / 1.25V

So, if GEIL is not a good option, there will be no other way than to go with mixed-timing kits...

But on the CORSAIR Vengeance LPX 2400MHz, the base timing is 15-15-15-36 at 2133MHz

(and I think it's maybe the same with the 2800MHz G.Skill, which is surely just an OC from the 2666MHz 15-15-15-35),

so don't you think it's possible to set the timings in a good way, even if it lowers the clock?

If possible, could you tell us which RAM kit you used for your tests?

Guru, Nov 25, 2014

Honestly, the 2x 32GB G.Skill kits have been working fine if you get them at the same time. The stock is moving so quickly that I don't think shipments are getting mixed. I would suggest that over the other RAM. Also, don't get the RAM where the first three timings are different; get the RAM where they're the same at CL15.

Eric

ADK

New Here, Dec 02, 2014

Thanks again Eric!

Finally - because the G.Skill 4x8GB / 2666 kit is almost impossible to find in Europe -

I had no other choice than to order two G.Skill 4x8GB / 2400 kits...

but I'll try to OC them to 2666 because, if I understood what you said,

it will really increase the performance with the 5960X.

I have one last question about RAM + OS + video apps, in the specific case of these workstations dedicated to huge 4K footage.

When you have 64GB of RAM, do you really need 64GB x 2.5 of swap file to prevent the Windows "blue screen of death"?

For example, my workstation will have a 120GB M.2 for OS and apps + a 512GB SSD for scratch + a 256GB SSD for project files

+ 2x3TB RAID0 HDD7200 for R3D footage + 2x3TB RAID0 HDD7200 for 3D/VFX renders + 2x3TB RAID0 HDD7200 for final exports...

But, with all this RAM and these drives inside, do Premiere / After Effects / Resolve... really need 160GB more, just for swap files?!

In my case, with 20GB of Windows + 40GB of apps + 160GB of swap, I'd need to upgrade to a 250GB M.2 right now...

Guru, Dec 02, 2014

Well, 2666 will have an impact on R3D 5K and 6K realtime more than most other workflows. Essentially, only when you push the GPU acceleration hard will you see the advantage of the higher RAM clock. 2400 is good enough, though, so I wouldn't press it. You just want higher than 2133, since the latency on DDR4 went to CL15 versus CL11 for DDR3. The only way to make up for the latency difference is clock speed.
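To put rough numbers on that point, here's a sketch of the arithmetic (my own, using first-word CAS latency = CL cycles divided by the real clock, which is half the DDR transfer rate):

```python
# Absolute CAS latency in ns: CL cycles / real clock (MT/s divided by 2).
def cas_ns(transfer_mt_s: int, cl: int) -> float:
    return cl * 2000 / transfer_mt_s  # ns

kits = [
    ("DDR3-1600 CL11", 1600, 11),
    ("DDR4-2133 CL15", 2133, 15),
    ("DDR4-2400 CL15", 2400, 15),
    ("DDR4-2666 CL15", 2666, 15),
]
for name, mt_s, cl in kits:
    print(f"{name}: {cas_ns(mt_s, cl):.2f} ns")
# DDR4-2133 CL15 is slower in absolute terms than DDR3-1600 CL11;
# at 2400 and above, the higher clock has made up the difference.
```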

No, you don't need 64GB for the swap file; you just need a swap file. Many applications still write directly to it rather than using RAM. 16GB is normally more than enough. However, until application designers quit writing directly to the swap file for lower-priority processes, you still have to maintain it or risk application errors.

Eric

ADK

New Here, Dec 05, 2014

Just to clearly understand... in regular editing, with a 5960X / 64GB 2400,

and without using CUDA (without any color correction inside Premiere, for example),

the R3D 4K/5K/6K will be fully read by the CPU, without losing any frames.

But, in R3D 5K/6K, if we use CUDA a lot, the RAM will need more bandwidth

(given through higher clock speed and lower latency)

to "send" that huge footage from the HDD to the GPU as fast as possible?

And, due to its "smaller" size, the R3D 4K won't be affected (or only a little) by this kind of limitation,

and the bandwidth of the 2400 will be good enough to "carry" all the 4K data to the GPU in almost realtime?

So - if the previous "chapter" was right - is it possible to gain some bandwidth without using higher-clocked RAM,

but just by lowering the playback resolution of the R3D footage and, for example, doing the color correction in half-res?

Or will it not change anything, because no matter the playback resolution, CUDA absolutely needs to work on the full-size footage?

New Here, Jan 22, 2015

Are you still getting better benchmarks from the 780Ti than from the 980?

Guru, Jan 23, 2015

The 980GTX has about the same performance as the 780Ti, but the heat and power draw are lower. The 980GTX has more load headroom as well. I would not recommend a 780Ti at this point. Keep in mind the 900-series Titans are getting close to release.

Eric

ADK
