This isn't actually focused on video editing or any such activity. But it is hardware related and may be of interest to the hardware geeks here. Before we go too much further: I'm still actually stumped as to what's going on here. I have a long-standing tier 2 case open with NVidia on it for more information, and they've not come back with anything useful yet.
Let's dig in. Again remember: this is NVidia GPU related, but not aimed at editors. Just "geek" stuff.
Back in July, I picked up the new Asus PG27UQ display. This is a 27" 4K display capable of pushing a 144Hz refresh rate. Key for: gaming. That is assuming you have the GPU(s) that can push 4K at high refresh rates. And I do: a pair of overclocked Titan X Pascals. Along with the new Asus, I grabbed 2 LG 4K/60Hz displays, and made it a trio. Cool.
Here's where the fun came in. I have a second PC in my office that does video capture. It has an Elgato 4K60 Pro capture card in it, which operates using HDMI 2.0. As soon as I connected that card to the HDMI output of my Titan, the fast Asus display got clocked down to 120Hz. Hmmmm. Interesting. This, mind you, was before doing any display capturing or mirroring or anything like that. The mere act of connecting the capture card (which looks like a 4th 4K display to the Titan) did the trick.
It's not the capture card's fault, either. If I disconnected one of the LG 4K/60Hz displays, the fast Asus display would clock itself back up to 144Hz. Some sort of GPU limitation. Hmm. Even more interesting.
I un-SLI'd my system so I had 2 independent Titans running. When I did that, I could move one of the LG displays to the 2nd card. All 4 displays would run at their max refresh and resolution. This would further indicate some sort of limitation with the big Titan card. A throughput limitation. Bus limitation? Pixel clock limitation?
Let's do some math. This is overly-simplified but it'll hopefully make sense, assuming I did my math correctly. 🙂 The overall throughput of all 4 displays running at their max rez/refresh should be something like this:
3840 pixels/line * 2160 lines/screen * 144 screens/second * 24 bits/pixel + 3 * (3840 pixels/line * 2160 lines/screen * 60 screens/second * 24 bits/pixel) == ~65Gbits/sec
What about when the fast panel gets cut down to 120Hz instead of 144?
3840 pixels/line * 2160 lines/screen * 120 screens/second * 24 bits/pixel + 3 * (3840 pixels/line * 2160 lines/screen * 60 screens/second * 24 bits/pixel) == ~60Gbits/sec
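A quick way to sanity-check that arithmetic (a sketch only: it counts raw 24-bit pixel data and ignores blanking intervals and link encoding overhead, so real link rates run higher):

```python
# Per-display raw throughput: width * height * refresh * bits_per_pixel.
# Blanking intervals and encoding overhead are ignored -- raw pixel data only.
def display_gbits(width, height, hz, bpp=24):
    return width * height * hz * bpp / 1e9

fast_144 = display_gbits(3840, 2160, 144)  # the Asus PG27UQ at full speed
fast_120 = display_gbits(3840, 2160, 120)  # the Asus after being clocked down
slow_60  = display_gbits(3840, 2160, 60)   # each LG panel / the capture card

print(f"All four at max:   {fast_144 + 3 * slow_60:.1f} Gbit/s")  # ~64.5
print(f"Asus cut to 120Hz: {fast_120 + 3 * slow_60:.1f} Gbit/s")  # ~59.7
```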
These numbers are interesting: the full-rate total (~65Gbit/sec) apparently trips some internal limit, while the throttled total (~60Gbit/sec) squeaks in under it. So maybe there's a raw-bandwidth cap on the card somewhere in the low 60s?
What about the pixel clock? It's not a stat NVidia has published for the Pascal cards. Again, it's all math (I think), and it's similar to the previous math but without the colors:

3840 pixels/line * 2160 lines/screen * 144 screens/second + 3 * (3840 pixels/line * 2160 lines/screen * 60 screens/second) == ~2.69Gpx/sec

Drop the fast panel to 120Hz and it becomes:

3840 pixels/line * 2160 lines/screen * 120 screens/second + 3 * (3840 pixels/line * 2160 lines/screen * 60 screens/second) == ~2.49Gpx/sec

Perhaps there's a 2.5Gpx/sec clock rate limitation?
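The same sanity check works for raw pixel rate (again a sketch; real pixel clocks include blanking intervals, so the true numbers would run a bit higher):

```python
def display_gpx(width, height, hz):
    # Raw pixels per second, ignoring blanking -- real pixel clocks are higher.
    return width * height * hz / 1e9

# All four displays at max refresh vs. the Asus throttled to 120Hz.
full      = display_gpx(3840, 2160, 144) + 3 * display_gpx(3840, 2160, 60)
throttled = display_gpx(3840, 2160, 120) + 3 * display_gpx(3840, 2160, 60)

print(f"{full:.2f} Gpx/s vs {throttled:.2f} Gpx/s")  # ~2.69 vs ~2.49
```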
To solve my little speed/throughput/whatever problem, I picked up an el-cheapo GT-1030 GPU. It's a single-slot, low-profile, very-low-performance card. But it's enough to run two 4K/60Hz panels just fine, leaving my Titans to focus on gaming. Silly that I had to do it.
Thoughts? Did it make sense? Any ideas WRT the limitations I'm running into?