Hi there, I have built the Transmitter example plugin provided in the C++ SDK 22.0 on macOS and when enabling the Transmitter in Mercury playback prefs, the playback is very slow and choppy. The same behaviour is not observed with the Transmitter SDK example on Windows.
In the Premiere Pro console, for a 25fps sequence this is returned (Avg Display FPS is always ~ 10):
<8067733> <UpdatePlaybackStatusInfo> <0> Frames dropped during playback: 0 / 28, Preroll(ms): 31.7009
<8067733> <UpdatePlaybackStatusInfo> <0> Avg Prefetch(ms): 4.31284, Avg Render(ms): 29.6625, Avg Display FPS: 9.07647
When the Transmitter is disabled in Premiere Pro prefs, playback is smooth and runs at 25fps as expected. This has been tested and confirmed on 3 Macs.
Upon further investigation into the Transmitter SDK example, I found that the clock() function is used on both Mac and Windows to get the elapsed time, but there is one big difference: on Windows this function returns wall time (i.e. real-world time, like a usual timer), while on Mac it returns CPU time, i.e. the amount of time the processor has spent doing work (busy time). AFAIK this is incorrect; a plugin's clock function should provide a monotonic heartbeat to the host to drive it properly. With the approach in the SDK example, the final FPS value depends on the overall system load: the higher the load, the longer the intervals returned by clock(), and the faster Premiere plays (or vice versa).
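For what it's worth, here's a minimal standalone sketch (not from the SDK example) showing the behaviour I mean: across a one-second sleep on macOS, clock() barely advances, while a monotonic wall clock advances by roughly 1000 ms.

// Standalone sketch illustrating CPU time (clock()) vs. wall time on macOS
#include <ctime>
#include <chrono>
#include <thread>
#include <iostream>

int main()
{
    const clock_t cpuStart = clock();
    const auto wallStart = std::chrono::steady_clock::now();

    std::this_thread::sleep_for(std::chrono::seconds(1));

    const double cpuMs = 1000.0 * (clock() - cpuStart) / CLOCKS_PER_SEC;
    const double wallMs = std::chrono::duration<double, std::milli>(
        std::chrono::steady_clock::now() - wallStart).count();

    std::cout << "clock() elapsed:    " << cpuMs  << " ms\n"; // near 0 on macOS (CPU time)
    std::cout << "wall clock elapsed: " << wallMs << " ms\n"; // ~1000 ms
    return 0;
}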
Can anyone from the SDK team comment on this or let me know where I'm going wrong?
I encountered the same problem, and after investigation you were right to suspect a discrepancy between wall time and CPU time. The issue is nevertheless NOT in the usleep() call, but in the clock() call at the start of the loop. For some reason, the behaviour of this function differs between Linux (and therefore macOS) and Windows: the Windows version of clock() uses wall time, while the macOS version uses CPU time by default, messing up the value sent to clockInstanceData->clockCallback().
I recommend using clock_gettime() instead, since you can specify which type of time to use. Here's the updated version:
while (clockInstanceData->clockCallback && *clockInstanceData->callbackContextPtr)
{
    // Calculate the time elapsed (in microseconds) since the last time we checked the clock
    clock_gettime(CLOCK_REALTIME, &newTime);
    unsigned long tempTimeElapsed = ((newTime.tv_sec - latestClockTime.tv_sec) * 1000000L +
                                     (newTime.tv_nsec - latestClockTime.tv_nsec) / 1000L);
    latestClockTime = newTime;

    // ... rest of the loop body (clock callback invocation and sleep) is unchanged
where newTime and latestClockTime are of type timespec instead of clock_t
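For completeness, the declarations before the loop would become something like this (just a sketch; seeding latestClockTime with an initial clock_gettime() call is my assumption, the SDK example may initialize it differently):

// Sketch: declarations/initialization before the loop, with timespec replacing clock_t
struct timespec newTime = { 0, 0 };
struct timespec latestClockTime = { 0, 0 };
clock_gettime(CLOCK_REALTIME, &latestClockTime); // seed the "previous" timestamp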
I also recommend replacing the usleep() call with nanosleep(), as the latter uses wall time as intended and is generally more reliable.
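A sketch of that replacement, assuming the sleep interval is held in a variable in microseconds (intervalUs here is a placeholder name, not the one used in the SDK example):

// Sketch: replacing usleep(intervalUs) with nanosleep(); requires <time.h>
struct timespec sleepTime;
sleepTime.tv_sec  = intervalUs / 1000000L;          // whole seconds
sleepTime.tv_nsec = (intervalUs % 1000000L) * 1000L; // remainder, converted to nanoseconds
nanosleep(&sleepTime, NULL);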