Good evening,
I am planning to buy a new GTX 1080 or a GTX 1070 for 3D rendering, Premiere Pro, After Effects, and all the CS apps. I have read that the new Pascal architecture is not yet supported by Adobe. Is that true?
Thank you for your time.
The GTX 1070 and 1080 are currently having problems with preview/playback. If you choose the CUDA-accelerated GPU as your playback renderer, the preview screens (all of them) go black and you won't see anything at all, only hear the audio. People are saying this is a driver issue, so until it's fixed we just have to use software playback.
I have the MSI GTX 1080 Gaming X, and this is the exact problem I'm having.
EDIT: I just tried to render/export a video with my GPU and I can't even do that; it gave me an error. I have to use the software renderer, which takes forever...
Guess I have to go back to Sony Vegas until this is fixed...
Thank you for your reply. It's sad; I can understand that new hardware needs to be optimized for, but it's 2016, and Nvidia and Adobe should be working together to solve compatibility issues.
The problems are that Adobe has not updated its Mercury Playback Engine in well over a year, and that Sony no longer owns the Vegas software line (Vegas is now owned by MAGIX). Also, with Vegas and that GTX 1070 you're pretty much stuck with CPU-only (or "software-only") rendering: Vegas only supports CUDA on the ancient Tesla- and Fermi-architecture GPUs, and nVidia GeForce GPUs in general perform much worse than AMD Radeon GPUs when both run in OpenCL mode.
I fixed this issue several days ago. I started with a Windows reinstall and installed the older 368.25 driver, and everything worked perfectly. Then I went to the Nvidia subreddit, saw a new driver (368.81), and installed it. Everything now works perfectly. I don't think it was the Windows reinstall that fixed it, since that Windows install was relatively new anyway; it had to be the 368.69 driver that messed things up.
Thank you so much!
Now I can upgrade to a new workstation.
To all!
I just got my first Premiere Pro BenchMark (PPBM) submission of results that uses a GTX 1080. In PPBM we have an exaggerated timeline that uses lots of GPU-accelerated MPE effects and features. If it is run once with GPU acceleration and a second time with the CPU only (no GPU acceleration), the two results show the gain from GPU acceleration. The best I have been able to do with a single GPU on my overclocked i7-5960X system is ~15, but I do not have anything faster than a GTX 970 or a GTX 780 Ti. These new results show a single GTX 1080 with an amazing gain of 25.3, on Windows 10 with the 368.81 driver and Premiere Pro CC 2015.3.
I have been able to get a gain of 25 by using two GPUs, and a gain of 30 by going to three GPUs.
I just got word from Spencer that he is successfully editing RED Epic Dragon footage on his 4.4 GHz overclocked i7-3930K with 32 GB of RAM.
Harm built a new Monster just before he died, and the gain on it with a GTX 980 Ti was 17.8.
Thanks Bill! I upgraded from the GTX 760 and it's been great so far. My gain with the 760 was 11.3, so it's a big difference, and Premiere has not been crashing while editing RED footage now, whereas it would tank every couple of hours with the 760.
When you say a gain of 15 or 25, are you talking about percent or times?
I am talking about times. For instance, my overclocked GTX 780 Ti performs the test in 18 seconds, and without GPU assistance it takes 275 seconds. Remember, though, that this is a loaded benchmark designed to test capability, not what you would see in a "typical" timeline.
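To make the "times, not percent" point concrete, the gain figure used throughout this thread is just a ratio of render times. A minimal sketch, using the numbers from the post above:

```python
# PPBM "gain" is a ratio: software-only render time divided by
# GPU-accelerated render time for the same timeline.

def ppbm_gain(software_seconds: float, gpu_seconds: float) -> float:
    """Return the speed-up factor from GPU acceleration."""
    return software_seconds / gpu_seconds

# Figures quoted above: 275 s software-only, 18 s with an
# overclocked GTX 780 Ti.
print(f"GPU gain: {ppbm_gain(275, 18):.1f}x")  # → GPU gain: 15.3x
```

So a "gain of 15" means the GPU run finishes roughly fifteen times faster, i.e. a 1,400% improvement if you insist on percentages.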
Holy cow! Before I get any of the GTX 1000 series GPUs, I would love to see the PPBM results for exports to H.264 Blu-ray from a timeline. If Pascal is a sizable improvement over even Maxwell for that test, then my main i7-4790K rig might justify a GTX 1060 or 1070 (but a 1080 would be clearly overkill for this hyperthreading-enabled quad-core CPU-based system).
Wouldn't the H.264 BR results be more indicative of CPU/RAM? From the PPBM Test Methodology page:
"H.264 test:
Here the speed of CPU/RAM communication is king. Number of cores, clock speed and the amount of CPU cache are very important. Dual processor systems are somewhat hampered by the 2 chip communication, but mostly by their clock speed."
Interested in your response because you guys know a lot more than I do about this stuff. Here were my results with the GTX 1080:
System:
3930K OC'd 4.4GHz
32GB DDR3 1600
P9X79
GTX 1080
OCZ Vertex 4 256GB System Drive
Kingston 120GB Cache Drive
3x1TB Samsung 850 EVO (Windows Software RAID0) Project Drive
Many Backup HDDs
Corsair H80
RjL190365 wrote:
Holy cow! Before I get any of the GTX 1000 series GPUs, I would love to see the PPBM results for exports to H.264 Blu-ray from a timeline. If Pascal is a sizable improvement over even Maxwell for that test, then my main i7-4790K rig might justify a GTX 1060 or 1070 (but a 1080 would be clearly overkill for this hyperthreading-enabled quad-core CPU-based system).
Well, as you can see from above, the H.264 Blu-ray export with the GTX 1080 took 80 seconds, and I just dug up Spencer's earlier results, where the only change was his use of a GTX 760: with the GTX 760 it took 121 seconds, versus 80 seconds with the GTX 1080. Maybe we can ask Spencer to run one more test for us, which is not normally part of PPBM but would give Randall and others valuable information. Later I will pass along the info on how to run a CPU-only H.264 export and retrieve the data, so we can see the H.264 gain for both configurations.
Yeah, I'd love to help. Should I run the H.264 Blu-ray test in software-only MPE mode to take the GPU out of the equation?
I think that would be great. Just delete the current "H.264 test.m4v" file, turn off GPU acceleration, and export that file. It will probably take around 1,000 seconds. I did it on my laptop and it took 1,385 seconds, and on my 8-core, 4.5 GHz i7-5960X it was 460 seconds. That gives this laptop with its GTX 765M an H.264 Blu-ray gain of 1385/201 = 6.9. I also sent you the updated Statistics file for Premiere 10.3.
This will give RJL and the forum the gain numbers with your GTX 1080 and your older GTX 760 for the H.264 Blu-ray timeline export. It will not be as dramatic as the MPEG2-DVD export, as the H.264 Blu-ray export is much more CPU intensive.
I agree that the H.264 export is much more CPU intensive, and my results on both of my desktop PCs bear this out: on my main i7-4790K (overclocked to 4.7 GHz) PC with the GTX 970, my H.264 results were 884/95 seconds for a GPU gain of 9.3; on my i3-6100 breadbox with the GTX 960, they were 2039/197 seconds for a gain of 10.4.
H.264 test:
Render time was approx. 814 seconds*
GTX 1080 time was 80 seconds
814/80 = 10.2 GPU gain
*I took the render time from the file's created and modified timestamps. It might be more accurate to use the Statistics file, but I didn't have the other files rendered, so the Statistics file fails; as far as I know it looks at the created and modified timestamps as well.
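The timestamp approach described above can be sketched roughly as follows; note this is an approximation (the creation time only marks when the encoder opened the output file), and the filename is the PPBM output named earlier in the thread:

```python
import os

def export_seconds(path: str) -> float:
    """Approximate an export's duration from the output file's
    timestamps: creation time ~ when the encoder opened the file,
    modification time ~ when it finished writing.

    Note: on Windows, st_ctime is the creation time; on Linux it
    is the inode-change time, so this trick is Windows-specific.
    """
    st = os.stat(path)
    return st.st_mtime - st.st_ctime

# Hypothetical usage with the PPBM output file:
# print(f"Render took ~{export_seconds('H.264 test.m4v'):.0f} s")
```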
Just ordered my GTX 1070! I'm so excited to see the updates in this thread. I was planning on waiting a little longer because of these Premiere issues and the current overpriced climate, but I saw a deal on Jet.com for $375, free shipping and no tax, for an Asus ROG Strix 1070. It should arrive next week. I knew Pascal would bring a nice speed-up to Premiere, and I can't wait for Adobe to update the MPE and get even more performance out of it down the line.
An i7-6800K is my next purchase. I will post PPBM scores when I get it all together. Thanks for the updates, Bill!
I got a GTX 1060 and have the same black-screen problem in the preview.
I found a strange thing: the GPU core clock is 1595/1810 MHz (base/boost) by default. If I overclock the boost clock to 2000 MHz I get the black screen, and if I downclock to 1960 MHz the preview is OK again.
I double-checked; it really does depend on this setting.
Also, I don't see any rendering speed-up compared with my old GTX 570... it's just as slow, and 4K playback also didn't get much better, maybe a little bit.
When rendering, the GPU load is 20-30%; how do I load it more?
If you want improved Premiere Pro GPU performance from an nVidia CUDA card, you do not have to adjust the GPU core speed; all you have to do is boost the memory clock speed. Here is my GTX 1060 SC overclocked to 2400 MHz. To get to this actual speed, use your overclocking tool to set the memory clock to 2500 MHz; for some reason nVidia downclocks it to 2400 MHz when any CUDA application is running. The memory clock is really the only thing that helps with CUDA applications. You can also see that the other settings (power, fan speed) are still very conservative even under high GPU load.
That's my Premiere CUDA export...
And here is the stupid Gigabyte utility.
So this 1060 has an 8000 MHz memory clock, or I don't understand what you're talking about,
but in GPU-Z I see a maximum of about 2300 MHz...
If you use a tool like Speccy, you can see that there are four different levels of memory clock/shader clock speed. For some reason the drivers limit CUDA applications to Level 3, so you will not see Level 4 with Adobe software.
Now, just to confuse you more, you will also notice that there are three different ways of specifying the memory clock frequency. Some tools, like GPU-Z, use the true base frequency: the default memory clock on my card is 2002 MHz, and I overclocked it to 2502 MHz. But to complicate things further, if you look at the Sensors tab you will see that rather than running at 2502 MHz it is really running at 2399.6 MHz, because it is restricted to Level 3, as you saw above. Some people like bigger numbers, so they use 2x the clock rate, which apparently is the real memory clock; notice that GPU-Z shows my default memory clock as 2002 MHz while Speccy shows it as 4004 MHz. The third way is nVidia's virtual memory clock rate of 4x the base frequency, which of course is 8 Gbit/second, or 8008 MHz.
I hope this helps rather than confuses the issue.
So my GTX 1060 at the overclocked speed runs at a virtual speed of 9.6 Gbit/second.
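The three ways of quoting the same memory speed can be summed up in a few lines of arithmetic; a small sketch using the figures from the post above (the function name is just for illustration):

```python
# GDDR5 memory speed is reported three ways by different tools:
#   base      - the actual command clock (what GPU-Z's main tab shows)
#   ddr       - 2x base, the double-data-rate figure (what Speccy shows)
#   effective - 4x base, nVidia's "virtual" per-pin rate in Gbit/s

def memory_clock_views(base_mhz: float) -> dict:
    return {
        "base_mhz": base_mhz,
        "ddr_mhz": base_mhz * 2,
        "effective_gbps": base_mhz * 4 / 1000,
    }

# Stock GTX 1060: base 2002 MHz -> ddr 4004 MHz -> effective ~8.0 Gbit/s
print(memory_clock_views(2002))
# Actual clock under CUDA after the overclock (capped at Level 3):
# 2399.6 MHz -> effective ~9.6 Gbit/s
print(memory_clock_views(2399.6))
```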
I see that the GTX 1000 series has had some issues with Adobe software. Is this still the case? Have they resolved these issues?
I have seen great results from the GTX 1060, GTX 1070, and GTX 1080. If someone is having problems, it is generally a matter of finding the right nVidia drivers.
I know you run the GTX 1060 - which version of the nVidia driver do you use?