Hello, how are you?
I've been searching for a solution for a while, but I can't seem to find any.
The thing is, I've been capturing some gameplay footage, with files that last an hour or even more.
When I watch the files in any player (Windows Media Player, VLC, Media Player Classic) they play fine; the audio and the video seem to be in sync and OK.
But after I import to Adobe Premiere, it just gets out of sync. Even when I watch in the Source Monitor, before dragging to the timeline.
It looks like there is a problem when conforming the audio.
Even the duration is different from the original; there are some frames or even seconds of difference inside Premiere.
And this happens with different codecs: AVI (from FRAPS), H.264 (with AAC and MP3 audio)...
I've tried cleaning the cache, uninstalling and reinstalling the software, converting the files, everything.
Some are saying this is a recurring bug in Adobe Premiere. Isn't there any fix or something I could do?
It's really strange that the problems only occur AFTER importing. Outside of Premiere everything is fine, so there is no problem with the capture, right?
I would really appreciate it if someone could help me. Thanks a lot!
PS: I have Adobe Premiere Pro CS6.
Intel Core i7
12 GB RAM
GeForce GTX 580
HD 2 TB 7200RPM
Hey, I don't know if you've already found an answer to your problem from someone else, but I have had the same problem before, and I've finally found a fix that I'd like to share. I downloaded a video converter called Handbrake. (Mod 2021 update note: you can also use Shutter Encoder, which has more features and codecs than Handbrake.)
In Handbrake I imported the video I wanted to convert and changed the frame rate option from "variable framerate" to "constant framerate". After my webcam footage was converted, the sync problem was gone.
We've also got a tutorial here on how to fix it when working with screen-recorded footage. This helps with other non-camera formats too, like Zoom recordings or gameplay footage.
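If you'd rather script that Handbrake step than click through the GUI (handy for batches of long recordings), here's a rough sketch using HandBrakeCLI, the command-line build of the same app, called from Python. The file names and target frame rate are my own placeholders; match the rate to your sequence settings.

```python
# A rough sketch of the same Handbrake fix via HandBrakeCLI (the command-line
# build of Handbrake). File names and the target frame rate are placeholders.
import subprocess

def handbrake_to_cfr(src: str, dst: str, fps: str = "29.97") -> None:
    subprocess.run(
        ["HandBrakeCLI",
         "-i", src,      # source recording (often variable frame rate)
         "-o", dst,      # converted copy to import into Premiere
         "-e", "x264",   # re-encode the video with x264
         "-q", "20",     # constant-quality setting, visually close to the source
         "-r", fps,      # target frame rate
         "--cfr"],       # force constant frame rate instead of variable
        check=True)

handbrake_to_cfr("gameplay_vfr.mp4", "gameplay_cfr.mp4")
```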
Glad you got yours working. Others have had different issues. You had CFR
but you were setting a 30p frame rate in NTSC standard film count. When
you say 30fps or 30p, you're really saying 29.97fps average. This is a
simple explanation, going back to film history in the US. In most places
where PAL standards set the color system and frame count, you'll have exact
values, for much the same reason, with just a few adjustments to the mechanism.
In film, or in celluloid days, you recorded audio and video separately and
synchronized as best you could. In the metal oxide days, or VHS\8mm\hi8,
you could record both in the same manner. However, there was a noticeable
delay in sampling and imaging already in many television cameras (using
electrical signal to broadcast), and bandwidth was precious, so rather than
up the transfer bandwidth and frequency (more power needed), many cameras
employed a slight delay in audio. When VHS came around it went the
opposite way. Of course the main speeds were 24i, or 23.93fields (halved
picture sets of tv lines) per second. This delay came as a result of
separate heads recording, and sending at different rates or frequencies.
The heads were independent, you could mute sound input and the head
wouldn't touch the tape. All this mechanism introduced slight sync
problems. By dropping the rate down to an average, the problem became
unnoticeable. That was the US way. When other nations wanted to be more
exact, they set the rates, and every so often time-stretched and compressed
to fit. 25p, 30pPAL, 50p, 60pPAL; these were used mainly in Europe and
eastern nations. The difference isn't much, some bandwidth, and data, but
their mechanism corrects itself.
Just a note: 60p in the US is 59.94fps.
It's a holdover from sync problems in early film. Even today, the exact
frame rate isn't perfect in digital, and sync problems do occur at higher
rates. Drop frame is a great way to adjust, stretching the problem over so
long a string, it's infinitesimally small, and completely unnoticeable.
But the NTSC color schemas... ...yuck! "Never the same color" = NTSC
Glad you realized it was your set rate. For those with a similar issue:
most HD is drop frame, but a lot of telecines speed-shift or synthesize
frames. If you've set your sequence to a rate of exactly 30fps and are out
of sync, try dropping the sequence rate to 29.97. Some game capture devices
don't properly encode this value, hoping you'll use iMovie or some other
home software that will re-time your audio for you, sensing they are out of
sync by checking track data and timecode. If you want to stay closer in
sync, try a 24p rate. This is a standard video rate, and it matches up with
48k audio sampling exactly at 24, and with 44.1k in drop frame.
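As a side note from me (not part of the post above), here's a small sketch that makes the 29.97 relationship concrete: NTSC "thirty" is really 30000/1001 frames per second, and drop-frame timecode compensates by skipping timecode labels (never actual frames) at the start of most minutes.

```python
# A small illustration of why NTSC "30 fps" is really 30000/1001 fps, and how
# drop-frame timecode keeps clock time honest by skipping timecode labels
# (never actual frames) at the start of every minute except each tenth minute.
from fractions import Fraction

NTSC_30 = Fraction(30000, 1001)
print(float(NTSC_30))                    # ~29.97002997 fps

def drop_frame_timecode(frame: int) -> str:
    """Frame count -> HH:MM:SS;FF drop-frame timecode at 29.97 fps."""
    drops_per_min = 2                                       # labels 00 and 01 are skipped
    frames_per_min = 30 * 60 - drops_per_min                # 1798 labels in a dropped minute
    frames_per_10min = frames_per_min * 10 + drops_per_min  # 17982 labels every 10 minutes
    tens, rem = divmod(frame, frames_per_10min)
    frame += drops_per_min * 9 * tens
    if rem > drops_per_min:
        frame += drops_per_min * ((rem - drops_per_min) // frames_per_min)
    ff = frame % 30
    ss = (frame // 30) % 60
    mm = (frame // (30 * 60)) % 60
    hh = frame // (30 * 60 * 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"

# About one hour of real time at 29.97 fps is 107892 frames,
# which drop-frame timecode labels as exactly 01:00:00;00.
print(drop_frame_timecode(107892))
```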
For future reference, a pro workflow (for pro software like this) looks
similar to:
1. receive video
2. ingest video
3. process video
4. output video
2 and 3 have a lot more to them.
Ingesting is supposed to get rid of any sync problems, and you can set your
video rate as you please. Going from 29.97 to 30 isn't that big of a leap,
and AME can handle it if you turn on Frame blending in the options. I
usually output 2 files at once from ingestion (2 for each video clip), one
for editing and one for final pass through. There is a difference in the
time it takes to render. More data going in always takes longer, but the
output usually looks a little better. Ingesting video takes the guesswork
out of the whole thing. Ripping to a single codec saves render time, as it
doesn't need to open a new decoder for each clip. Color grading is better
when you use a heavily compressed and a lightly compressed version of the
same codec, and your end quality will be more accurate.
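If anyone wants to script that two-outputs-per-clip ingest step outside of AME, here's a rough sketch of the idea using ffmpeg from Python. The codecs, quality settings, and folder names are my own assumptions for illustration (a small proxy for editing plus a higher-quality master), not the exact outputs described above.

```python
# A rough sketch of the "two outputs per clip" ingest idea: a lightweight proxy
# for editing plus a higher-quality master for the final pass, both forced to a
# constant frame rate. Codecs, settings, and paths are placeholder assumptions.
import subprocess
from pathlib import Path

def ingest(clip: Path, out_dir: Path, fps: float = 29.97) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    common = ["-r", str(fps), "-vsync", "cfr", "-c:a", "aac"]
    # Lightweight proxy: smaller frame size, heavier compression, fast to scrub.
    subprocess.run(
        ["ffmpeg", "-i", str(clip), *common,
         "-c:v", "libx264", "-crf", "28", "-vf", "scale=-2:720",
         str(out_dir / f"{clip.stem}_proxy.mp4")],
        check=True)
    # Higher-quality master used for the final conform and export.
    subprocess.run(
        ["ffmpeg", "-i", str(clip), *common,
         "-c:v", "libx264", "-crf", "16",
         str(out_dir / f"{clip.stem}_master.mp4")],
        check=True)

for clip in Path("raw_footage").glob("*.mp4"):   # hypothetical source folder
    ingest(clip, Path("ingested"))
```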
Processing your video requires precision in some instances. Knowing the
timecode's exact place, and that each and every video complies with that
speed means that your machine isn't constantly re-timing playback during
your edits.
It would be great if you could get perfect output from every video codec at
whatever speed and audio sample rate was preset all the time, just on a
whim. Professionals rely on fact, figure, number, and naming. They rely
on themselves, not everybody else, to do their work right. That's what
this software targets. It's important to remember that.
I remember watching an engineer with a PhD in physics trying to get data
from his TI calculator to his pc to run in a program he built. He kept
crashing out of the program, and having to re-enter all the data. He
insisted there was something wrong with the calculator or the pc. There
was... ...He was using data measured in radians in a 4 dimensional plot in
his math, but his calculator was set to degrees, and kept sending error
data back to the pc, and his program would end up with bad output or a
crash. He had no idea how to set the thing to radians. He read that it
computed radians, assumed it was the norm and went on pulling his hair
out. It never dawned on him to learn the tool before using it. After I
showed him how to set it, he sprang for a campus cup of coffee. Colleges
do worse than Starbucks...
For those using Elements software, remember that it is derived from the
same core as the professional set. It is called PROSUMER grade, not
perfectly pro, but far above consumer grade. Saying "but isn't that
different?" means you should've bought the $20 POS your kid said was trash,
and been happy with it. It assumes there are problems, runs slower, and
you have to pay out to add functions to it, but it assumes no knowledge on
your part. It will baby you through it.
Personally, I like learning pro software and using it at the same time.
It's a walk back through history, and older workflows that were spot on for
"Perfectionism" and professional quality. It takes more prep, but once
through, the prep-stage makes you ready for everything else to happen at
tachyon speeds (particles traveling faster than light are called tachyons,
by the way).
Processing requires tagging and knowledge of each clip or subclip, so you
can quickly move them around. Nobody wants to worry about whether or not
the framerates line up, and you don't want your system to give you the
spinner (a computer's middle finger) that says it has to reprocess
something over and over again. It's harder on hardware, and on your time,
and your budget... ...not to mention your reputation. I urge you all to
adopt an ingest step, where you tag or name your clips as you export them
to an edit format. It will eliminate this problem completely. No more
guessing.
Pop goes the weasel... ...that ran away with my sanity.
I have had the same problem, and the answer was very helpful. However, I have been having similar problems with plain MP3 files. I have audio that I want to insert under a video, but on import, the audio glitches out very strangely. It will take some audio from one place and slap it in a completely random place. Then it will cut out and leave a few frames of silence before another random cut. It only happens on one specific computer; on any other it works fine. Same thing: it is fine before, but on import, it does not work. I would assume that if it were a problem with conforming, then closing and reopening the project would fix it, because it reconforms on every reopen. Is there any way to reconform the audio? I have tried copying the file and re-importing; nothing is working. Are these problems even from the same issue? Any help would be appreciated.
I simply zoomed into the spot where the lips are not in sync, unlinked the audio from the clip, tweaked it a little until it was perfect, and then linked them back together again. Hope this simple fix helps!
Also previously mentioned, but probably buried in earlier posts: I would recommend using Handbrake, as the variable frame rate is what is problematic with internet- or cell-phone-sourced media, as I discovered when trying to import files that I had captured and then downloaded after a live stream session on Ustream. The video would play fine in the Windows and QT media players, but the audio was out of sync in Premiere.
Once you open Handbrake and import your footage, there is an option under the Video tab that you enable to make the frame rate constant, and then you export the footage again. It was fine when I imported it into PrePro.
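If you want to check whether a clip is actually variable frame rate before converting it, here is a rough sketch of one way to do that with ffprobe (my own addition, not from the post): compare the stream's declared frame rate with its measured average rate; a mismatch usually means VFR.

```python
# A rough heuristic for spotting variable frame rate clips before import:
# compare the declared frame rate (r_frame_rate) with the measured average
# (avg_frame_rate) reported by ffprobe. A mismatch usually indicates VFR.
# Assumes ffprobe is installed and on PATH; the file name is a placeholder.
import json
import subprocess
from fractions import Fraction

def probe_rates(path: str) -> tuple[Fraction, Fraction]:
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=r_frame_rate,avg_frame_rate",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    stream = json.loads(out)["streams"][0]
    return Fraction(stream["r_frame_rate"]), Fraction(stream["avg_frame_rate"])

declared, average = probe_rates("livestream_capture.mp4")
print(f"declared {float(declared):.3f} fps, average {float(average):.3f} fps")
if abs(float(declared) - float(average)) > 0.01:
    print("Rates differ: likely VFR; convert to constant frame rate before importing.")
```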
I've had success renaming the .mp4 file extension to .m4v. .m4v is Apple's version of .mp4 and Premiere likes it better for some reason, no more out of sync audio. If this worked for you be sure to let me know!
The Handbrake solution worked for me. A few tips.
Set the "Chapters" to "seconds."
Under "Video" tab click on "Constant Frame Rate"
The weird part? This doesn't always happen. I do a show using Youtube Streaming. I download the MP4 file and edit it. in PE14. Sometimes it is fine sometimes horribly out of sync. You figures.
For a long time I had blamed Google for screwing up the snyc until I started viewing my file in another viewer to see it was fine.
Come on Adobe. For those of us who use so many of your products--FIX this.
Also, the MTS files from my Canon Vixia are fine.
www.ohioraamshow.com
Hey there Lee in Ohio,
Come on Adobe. For those of us who use so many of your products--FIX this.
Understood. We're getting there on getting this feature implemented. As your advocate for this issue, I ask that more of you file feature requests. Have you filed yours yet? Anyone else on this thread, we need more of you to request variable frame rate support. Your attention to this issue is appreciated.
Regards,
Kevin
I have had this problem in every variety you can imagine, and every time the causes and solutions are completely different (if I can even figure them out). After hours of research, it appears bailzmc1 is mostly, if not completely, correct. During my research, I've found that this particular problem seems to be directly linked to MP4 files, possibly because MP4 is the only format that supports a variable frame rate (though that's only a guess).
Regardless, I haven't yet tried Handbrake, but I was able to solve the problem by putting my video files through Apple's Compressor app and essentially rewrapping the MP4 files into MOV. If anyone wants to give Compressor a shot, I would advise using the "HD 1080p" setting in the "Video Sharing Services" group because it keeps your MOV files about the same size as the original MP4. Otherwise, using something like Apple ProRes will turn a 170MB file into a 1.6GB file. I'm an avid PC user, but fortunately I work in a studio full of iMacs.
PS: I was using a Logitech C922x and the Windows 10 Camera app.
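For anyone without Compressor, a similar rewrap (a container swap without re-encoding) can be done with ffmpeg; this is my own sketch, not something from the post above, and it only helps when the streams inside the MP4 are already MOV-compatible (e.g. H.264 plus AAC).

```python
# A sketch of rewrapping MP4 into MOV without re-encoding, similar in spirit to
# the Compressor workflow described above. Stream copy only works when the
# codecs inside are MOV-compatible (e.g. H.264 + AAC). The file name is a
# placeholder; assumes ffmpeg is installed and on PATH.
import subprocess
from pathlib import Path

def rewrap_to_mov(src: str) -> Path:
    dst = Path(src).with_suffix(".mov")
    subprocess.run(
        ["ffmpeg", "-i", src,
         "-c", "copy",              # copy video and audio streams as-is (no quality loss)
         "-movflags", "faststart",  # move the index to the front so editors read it quickly
         str(dst)],
        check=True)
    return dst

rewrap_to_mov("webcam_capture.mp4")
```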
I found an easy fix. Change the .mp4 suffix on your file to .mov. Import the .mov into Premiere and the audio is in sync. Easy fix.
That's not a fix. That only works if the MP4 file is .MOV compliant. Not all cameras actually produce a fully compliant file. That's why the more accurate and permanent fix is running an INGEST-type conversion.
Ingestion of data has been part of the workflow since the beginning of digital formats. You could capture with anything, but you had to run it through a machine that ingested the signal into your editing program, then run the final product back out. Compression wasn't the issue; playback was. The linear edit styles had editors running the video hundreds of times, creating lists of edits, then running the output to a master medium that was later used to create new copies.
Today, there are many formats that differ only slightly in the algorithms that compress the video and in the full frames used to compress the others. The algorithms used, the variable settings, and the chosen values for those settings all play a huge role in how compatible the actual data is with different wrappers. One camera might create files you can just change the extensions on to make them work, where other cameras typically don't, in which case changing the extension could completely destroy the usability of the file, or just make it unreadable until you change the extension back.
Ingestion of digital files for nonlinear editing is an essential part of the process. Different cameras will have different color palettes. Pre-grading angles is the first step, and it can be done by simply converting all your shots to the format with the smallest or most narrow palette. The next step is balancing them so they match, and finally you convert those to the format you'll use for edits, and the format you'll use for output. Edit formats are more heavily compressed and play back faster at lower quality, but the final-product versions need to be as full-format as possible, so they don't lose quality when the output is recompressed. This also removes the step of decompressing the edit files to compress them back again, and saves hours upon hours of render time. All of this turns a weeklong job into one that takes about three days. That's just for the video, of course, but you get the point.
Hey bro, you're preaching to the choir. I used the Adobe "frame blending" technique back in the days of CS6. It took care of the frame rate and sync issues. Today, it actually just reads the change in frame rate and does nothing! Well... unless you change the format, the data rate, or some other major factor; alter these and it will reformat the video entirely, rebuilding the frames.
The Quicktime or Compressor version of this operation is much easier, and they build better frames overall (motion adaptation algorithm is great but use as light a touch as possible; 1-10%).
Right on.
Of course this is akin to INGESTING your data, which is just good practice IMHO.
Thanks for that answer! Helped me big time!
Old thread, but this may help some who are still having the issue:
How to Fix Out of Sync Imports in Adobe Premiere Pro (Mp4 video problems) - YouTube
Converting in Handbrake works really well. Thanks for the great tip!
Handbrake is a great program for an input-to-output solution. Just read up, fellas. It has a few caveats, but with some practice and some study, you can get fast at it. You can even make your own presets, much like most variants of the same type of software.
To recap:
VFR is an old format for carry, not delivery. I talked to some old-hat cameramen. Cameras often had the problem when the electronic stabilizer was left on, prompting the creation of lens/sensor mount replacements that had a hardware stabilizer and an overscan version of the sensor that would avoid the problem. Cheaper broadcast systems still had to deal with it, which led to a standardized FLAGGED FRAME metadata and signal boxes that would read it (there were boxes before, but you had to match camera and signal box). Most desktop playback software can read and adjust with it, even from older cameras, but most network services require you to purchase more resources from their service to handle it if you don't handle it yourself. Movie production companies had to INGEST DAILIES, which meant sending footage to a preview machine that would indicate where frames were dropped and double a previous frame for previewing. From there, rough cuts were determined, and the footage was sent to an ingest process that would either blend a frame or use a double frame. For short drops, a double is OK; for longer drops, blends in between are better, but you really want to mix both in some instances. Interlacing made this easier, but it had jagged edges with faster subject motion, and you had to have a high frame rate, along with some edge blur to hide the aliasing.
Signal Input boxes that handle flagged frame do one of two things:
A. They introduce a lag of 5-120 frames (considering up to 60fps) to allow time to double the previous or following frames for the missing ones.
B. They hold the image in the output buffer and use a PULL on the output stream to grab the buffer, effectively grabbing the same frame twice (this is the passive version with no lag).
I've emailed a few manufacturers in the middle of the market. All quote one or the other. Both are considered broadcast-ready. These are better to use, and with wifi connectors for HDMI they are probably becoming more feasible for live shoots, or even camcorders, as they will output constant frame rates to a computer for recording.
Gamers have a real problem. Most don't want to accept that there's a big difference between render overlay capture and sensor capture. With a sensor, any stabilization can cause a drop here or there, but it's short and easy to mask with a simple AVS command feeding into any program; you can get started at the drop of a few lines of text (not perfect, but close). You can also do a quick pass with AME, outputting most settings with a match input, then setting your own frame rate and adding frame blending at the bottom checkbox; an hour of video at 4K took me about 15 minutes to fix a lot of camera shake on a recent-generation gaming-style notebook.
Most gamers want to use lesser setups and built-in render capture running in their graphics renderer, but they don't realize that it's at the mercy of the game's rendering. If the frame rate or action drops below the set frame rate in the render-capture program, it will not fix the drop in frames, and they will have to run it through Handbrake or AME; it will take a lot longer to fix, and Handbrake gets choppy with subtle movements at the end of the drop, while AME gets blurry. External capture is always best. I like Elgato, but they also tell you to max out your system for constant frame rate, and if you want to stream, you also have to max it out to get the flagged frame to work properly. This means you may need a second computer (actually, you should have two computers either way; I can run playback and FRAPS on a second netbook with a 1.2GHz Atom and get solid 1080p at 30fps with only 2GB RAM and XP; I can buy those on eBay for $50! Spend $200 and you should be able to push FRAPS to 60p with 4-8GB RAM, an i5, and USB or USB 3).
Elgato has a 60fps usb3 connector, and I'd hook it to any hdmi source (even another computer), but it requires maxed out resources with an i5 16gb and 2-4gb gfx card for 60p at high resolution.
Handbrake worked for me.
ADOBE: This really is unacceptable. Do you guys (at Adobe) know how many times I encoded my clip, stripped off the audio, tried to hand-sync it...? Come ON! As much as you guys update these apps, let's get an update for this. It's a major problem and a huge time waster, and the fact that you have to go get some OTHER software to work around it is just crazy. A lot of companies lock a user's system down so they can't go out to one of these seedy download sites and just grab Handbrake and install it. Thank goodness mine isn't one of them.
I love you for this, thank you!
They are working on fixing this problem! Hooray! I was tapped for some video, but I've got 4 jobs, 7 college courses, and some nonprofit orgs putting on the Ritz for the season. I just picked up a game for grabbing gameplay video, and I'm going to send some in as soon as I can. If you have any short 30-second clips that have this problem, link me in over your cloud and let me link in the others working on this! It will go a long way toward fixing the issue.
I hope they don't just leave the problem in the video and process it out that way, like many other programs do. I hope they at least put in something that notes it and asks you to choose your fix at the output stage. For me, Handbrake works, but I get great results from Media Encoder using frame blending for slower action shots. There's a lot more responsibility on you to shoot well, and it doesn't clip the audio around it. Audio is supremely important for my purposes.
With action shots, there's even more responsibility to shoot well. If you can preempt your equipment (know it well enough), and you shoot 4K for punching in to 1080, you can use optical flow. Make sure you keep the background as steady as possible, and keep the subject across the center third box during motion and shake (don't follow them perfectly, but allow them to slowly creep ahead of you). It will take a while, but the render should calculate which pixels change the most between dropped frames, copy the ones that move least, and then place the others accordingly. For medium and straight-line motion, this works pretty well.
Handbrake can calculate a decent frame, but it also adjusts audio when there are too many frames in a row to calculate clearly. Intelligent, and better for higher action, not recital or performance videos that have a necessity for decent sound and smooth clarity.
Handbrake for video games is great, for skateboarding or thrill seeking. But if you utilize another audio source, use media encoder, and drop 2 outputs in high compression: 1 with frame blending and another with optical flow.
Optical flow also works amazingly well with still backgrounds where the camera shakes but you have a solid background color across at least most of the frame: sky, or even when a camera is on a tripod and bumped hard enough to kick the EIS into action.
Of course, these are just stopgap until the solution comes through.
Thank you!!! This worked
This issue still persists. Just record a video clip on an iPhone and try to work with it in Premiere. Adobe - how much disrespect can you have towards your customers? Pulling money out of people's pockets every month and can't even fix the most basic bugs for years?
After trying several of the solutions here to fix my QuickTime Player movie recording being out of sync after saving it, I had to choose Export > {resolution}... and Premiere Pro then reads it in just fine.
Wow. Thank you so much. It has solved my problem
This issue does not exist in DaVinci Resolve. I just added the long iPhone video that gets out of sync in Premiere to DaVinci Resolve and there is no issue. Avid does not have the issue either. Premiere seems to be the one that gets tripped up with mixed frame rates.
I used http://mirovideoconverter.com
which also works for converting it.
Has anyone tried Adobe Media Encoder?
I've tried Media Encoder without success. This problem just cropped up for me in the last couple of months after a PPro update. Now .MTS and .mp4 videos that play with synced audio in other players are out of sync in PPro, and I have to manually adjust the audio timing. I am guessing that maybe a codec has changed somehow, but I haven't found the answer yet.