AME exports get truncated or padded with black, so they end up significantly different lengths.
Hi all,
I'm not posting this as a bug report just yet, as we're having trouble replicating it reliably, but we just set up a new Mac Mini with an M2 Pro processor for our editing team, and it's having some very weird issues. The system is running the latest versions of PP / AME (we've just installed the AME patch that came out a couple of hours ago; we don't know yet whether that will make a difference).
Our workflow is to take a ~10-minute source file and make many dozens of slightly different variants (some have small sections removed, others need certain text or graphics cards replaced with slightly different wording or in different languages, etc.), so all the variants come from the same source material but end up different lengths, between 8 and 10 minutes each. Each variant is added to the AME export queue and exported as ProRes (nothing fancy, just basic 1080p / stereo / ProRes 422).
For some reason, many of the resulting exports over the last couple of days have come out at incorrect lengths. We checked the first few outputs from the new system very carefully and everything was fine, but it now looks like there are intermittent issues with output files being the wrong length: some of the longer versions cut off too early, while some of the shorter ones have had black padding added to the end to make them longer. There doesn't seem to be any pattern to which exports have issues and which don't, but it's more than half of them at this point.
No In/Out points were specified for the renders; everything was just set to render to sequence length.
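In case it helps anyone hitting the same thing, here's a rough sketch of how we've been batch-checking export durations. It uses ffprobe (from FFmpeg) to read each file's actual duration and flags anything that drifts from the expected length; the folder name and the expected durations are placeholders, since in practice those would come from your own sequence lengths.

```python
"""Batch-check export durations against expected sequence lengths.

Assumptions: exports are .mov files in an "exports" folder (hypothetical
path), and expected durations in seconds are known per file. Requires
ffprobe on PATH only for the probing step; the comparison is plain Python.
"""
import json
import shutil
import subprocess
from pathlib import Path


def probe_duration(path: Path) -> float:
    """Return a media file's duration in seconds via ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "json", str(path)],
        capture_output=True, text=True, check=True,
    )
    return float(json.loads(out.stdout)["format"]["duration"])


def find_mismatches(expected: dict[str, float], actual: dict[str, float],
                    tolerance: float = 1.0) -> dict[str, float]:
    """Return {name: actual - expected} for files off by more than `tolerance` seconds."""
    return {
        name: actual[name] - expected[name]
        for name in expected
        if name in actual and abs(actual[name] - expected[name]) > tolerance
    }


if __name__ == "__main__":
    if shutil.which("ffprobe"):
        files = sorted(Path("exports").glob("*.mov"))
        # Placeholder: real expected durations would come from the sequences.
        expected = {p.name: 600.0 for p in files}
        actual = {p.name: probe_duration(p) for p in files}
        for name, drift in sorted(find_mismatches(expected, actual).items()):
            print(f"{name}: off by {drift:+.2f}s")
```

Running this after each batch at least tells us quickly which exports are bad without scrubbing every file by hand.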
Does anyone have any idea what could be causing this? Are there any known issues with M2 systems or the most recent AME versions? (All our other machines are on older versions of Premiere for compatibility reasons, so this is our only system running 2023 software or Apple silicon.)
We're doing some more testing today to see whether we can reliably replicate the glitch. Right now our best guess is that it might have something to do with the number of jobs in the queue, as running one at a time hasn't produced any problems so far.
If anyone has advice on other things we should be looking at, we'd love to hear it, as this has us pretty stumped.
