Sorry, I think you are misunderstanding. I know I can't use hardware acceleration with the Xeons alone; I'm asking why I can't use those Xeons + the 1060 for hardware acceleration, as per this document:
| Feature | Requirements |
| --- | --- |
| Hardware-accelerated H.264 encoding | Mac OS 10.13 (or later) on Mac hardware from 2016 or later, or Windows 10 with 6th Generation (or later) Intel® Core™ processors and Intel Graphics enabled |
| Hardware-accelerated HEVC encoding | Mac OS 10.13 (or later) on Mac hardware from 2016 or later, or Windows 10 with 7th Generation (or later) Intel® Core™ processors and Intel Graphics enabled |
Does the above not apply if I have a discrete graphics card?
Integrated graphics is REQUIRED and must be ENABLED in order for H.264 hardware-encoding acceleration to work at all. If you have a qualifying CPU but then DISABLE the Intel HD Graphics in the EFI, you cannot have hardware H.264 encoding acceleration at all. And as I noted in my previous post, your dual Xeons (or any high-end large-socket CPUs, for that matter) do not have any integrated Intel HD Graphics whatsoever, so you cannot have hardware-accelerated encoding at all.
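If you want to verify what Windows itself can see, here is a rough, unofficial sketch (Python calling the built-in wmic tool; not an Adobe diagnostic) that simply lists the video controllers the OS reports. No Intel entry means no enabled iGPU, which is exactly your situation with the Xeons:

```python
# Rough sketch: list the GPUs Windows currently exposes.
# No "Intel(R) HD/UHD Graphics" entry means the iGPU is disabled in the EFI,
# or the CPU simply has none (as with big-socket Xeons).
import subprocess

def list_video_controllers():
    # wmic is deprecated on recent Windows builds but still ships with Windows 10.
    output = subprocess.run(
        ["wmic", "path", "win32_VideoController", "get", "name"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.strip() for line in output.splitlines()
            if line.strip() and line.strip().lower() != "name"]

if __name__ == "__main__":
    for gpu in list_video_controllers():
        print(gpu)
```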
So, the ONLY way to have both H.264 and MPE acceleration with a discrete GPU would have been to connect your monitors only to the discrete GPU's ports and then set the Intel HD Graphics to ALWAYS ENABLED in the EFI. (The default setting for this is AUTO, which disables any GPU whose ports are not being used.) But if you connect a monitor to the motherboard video-out port, you will "block" (or "lock out") the discrete GPU from ever being used for MPE GPU acceleration, and thus only the integrated Intel HD Graphics will be used for MPE acceleration.
Thus, with your dual Xeons you cannot have hardware-accelerated encoding in any case - even with a discrete GPU. (There is a third-party plugin that uses the NVIDIA GPU's NVENC encoder for accelerated H.26x encoding, but Adobe does not endorse or support it - at least not yet.)
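As a side note, if you want a quick, unofficial way to see which hardware encoders your machine exposes at all (outside of Premiere), you can ask ffmpeg, assuming you have it installed and on your PATH. "qsv" entries mean Intel Quick Sync is reachable and "nvenc" entries mean the NVIDIA encoder is; this says nothing about what Premiere itself will use, it is just a sanity check:

```python
# Sketch: filter ffmpeg's encoder list for Quick Sync (qsv) and NVENC (nvenc) entries.
# Assumes ffmpeg is installed and on PATH; shows only what ffmpeg can reach,
# not what Premiere Pro will actually use.
import subprocess

def hardware_encoders():
    output = subprocess.run(
        ["ffmpeg", "-hide_banner", "-encoders"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.strip() for line in output.splitlines()
            if "qsv" in line or "nvenc" in line]

if __name__ == "__main__":
    for enc in hardware_encoders():
        print(enc)
```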