I filmed an event with two cameras, and we mixed up the PAL/NTSC settings. We have the following:
Camera 1: 23.976 fps (continuous recording of 2 hours @ 200GB)
Camera 2: 25 fps (±250 short clips @ 600GB)
Audio recording (separate recording of full event): 48 kHz (2 hours)
We used Camera 1 for a continuous recording of the event. We'd like to synchronise the audio with Camera 1, and then overlay the short clips of Camera 2 as B Roll (while also syncing them to the audio).
Should I transcode the 23.976 fps footage to 25 fps and work in a 25 fps timeline, or transcode the 25 fps footage to 23.976 fps and work in a 23.976 fps timeline? And how should I do that? With Media Encoder?
Another curveball: since the footage is so large, I want to create proxies. I'm aware I shouldn't mess with frame rates after creating proxies. So should I transcode first, and then create the proxies?
If you change the interpreted fps, it changes the duration of the video. Alternatively, you can conform the video to a different sequence fps, then right-click the clip and set its time interpolation to Optical Flow (pixel motion) to optically change the frame rate (or export with optical-flow time remapping in Media Encoder).
So it's up to you: either speed-change the audio to match the video (via interpret fps), or use optical flow on the video.
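To put numbers on the interpret-fps route, here's a quick arithmetic sketch (pure illustration, assuming exactly 2 hours of 23.976 footage; 23.976 is really 24000/1001 fps):

```python
from fractions import Fraction

# 23.976 fps is really 24000/1001 fps
native_fps = Fraction(24000, 1001)
target_fps = Fraction(25)

# Interpreting 23.976 footage as 25 fps speeds playback up by this factor
speed = target_fps / native_fps          # about 1.0427, i.e. ~4.3% fast

# A 2-hour take shrinks accordingly
orig_seconds = 2 * 60 * 60
new_seconds = orig_seconds / speed       # about 6905 s, roughly 1 h 55 m

# To keep the separate 48 kHz audio in sync, it must be sped up by the
# same factor, e.g. by reinterpreting/resampling its sample rate:
audio_rate = 48000 * speed               # works out to exactly 50050 Hz

print(float(speed), float(new_seconds), float(audio_rate))
```

So interpreting the 23.976 footage as 25 fps costs you about five minutes of duration over two hours, and the standalone audio has to be stretched by the same ~4.3% or it will drift immediately.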
And you're right, proxies don't play well with a frame-rate change. So you could either use high-quality proxies as a full offline edit, with the optical-flow retime already burned in, or keep the interpreted fps as-is and apply optical flow only on export.
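If you'd rather bake the conform in outside of Premiere before cutting proxies, ffmpeg's `minterpolate` filter does a similar motion-interpolated frame-rate change. This is a hypothetical sketch, not the workflow above: filenames, the 540p proxy size, and encoder settings are placeholders, and you should verify the filter options against your ffmpeg build. The commands are only built and printed here, not executed:

```python
# Hypothetical sketch: build (but don't run) ffmpeg commands that conform the
# 25 fps B-roll to 23.976 fps with motion interpolation, then cut proxies
# from the conformed masters, so the proxies never need reinterpreting.
conform_cmd = [
    "ffmpeg", "-i", "clip_25fps.mp4",
    # minterpolate synthesises in-between frames (an optical-flow-style retime
    # that keeps the clip's duration, unlike an interpret-fps speed change)
    "-vf", "minterpolate=fps=24000/1001:mi_mode=mci",
    "-c:a", "copy",
    "conformed_23976.mp4",
]
proxy_cmd = [
    "ffmpeg", "-i", "conformed_23976.mp4",
    "-vf", "scale=-2:540",           # small offline-quality proxy
    "-c:v", "libx264", "-crf", "28",
    "-c:a", "copy",
    "proxy_540p.mp4",
]
print(" ".join(conform_cmd))
print(" ".join(proxy_cmd))
```

Because the proxies are cut from already-conformed masters, you never have to touch interpret-fps after proxy creation.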
Just do a quick sync test and see which you like best.