Variable frame rate video with Premiere Pro

Explorer, Dec 04, 2012

Variable frame rate video comes from many places these days: phones, live streamed video recordings.

Adobe Premiere is supposedly a production-level piece of software, and it costs a good chunk of change.

How is it 2012 and Adobe still does not have an answer to this problem?  After trying to edit/convert/mux/edit variable frame rate videos for the past 5 hours, I am just exhausted.  No amount of conversion apps, etc. has saved us, and THEY SHOULD NOT HAVE TO.

We have spent thousands on Adobe software packages over the last decade, probably tens of thousands, and the only answer I find consistently is to switch to Vegas.

Surely, SURELY someone at Adobe with real insight into the issue can help answer the question of whether users moving into a different medium should find a place elsewhere in the software ecosystem...

Message was edited by: Kevin Monahan

Reason: to make article more searchable

Title changed.

Views: 170.2K

1 Correct Answer

Adobe Employee, Jan 22, 2018

Hi rmshro0,

You can now work with variable frame rate video in Premiere Pro CC 2018 (12.0.1). Feel free to download that version from Creative Cloud. More info here: New features summary for the January 2018 and October 2017 releases of Adobe Premiere Pro CC.

I apologize that it took so long.

Note: if the video streams are too long, or the frame rate varies too widely, you may have to convert them in either Shutter Encoder or Handbrake.

Need more info? See this article: FAQ: How to work with Variable Frame Rate (VFR) media in Premiere Pro?

...

319 Replies

Explorer, Jul 25, 2017

To be perfectly honest, I don't really know exactly.  I am an animator/editor, relatively low on the totem pole at a company this big.  I don't get these projects too often, but when I do, my job is to get the footage, solve the problem, and send out a deliverable.

Getting on a soapbox and declaring that the footage is bad and that everything needs to be redone on better equipment will get one response and one response only: "Fire that guy, and hire someone who can shut up and provide results."

LEGEND, Jul 25, 2017

I can see your troubles ... and of course, there's a lot of user-provided phone media used on news broadcasts these days, saving the company the need to, like, actually ... employ videographers/photographers to go shoot things. So it's understandable they're sticking someone with the task of working that media into their overall output.

You have my sympathies on several levels. And technically especially ... that's sucky media for editing. Well ... if you don't "see" it closely, you won't notice the skipped frames, created artifacts & such ...

And I do understand why you just need something that works.

Neil

New Here, Jul 27, 2017

I run post production for a VR company, and many VR cameras on the market shoot VFR.  So professional or not, some tools out there only shoot VFR.

The only solution we've found is to convert the videos with Handbrake.  I've tried other converters--Adobe Media Encoder, Apple Compressor, etc.  Handbrake is the only one we've found success with so far.
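
If it helps anyone script that step, here is a minimal sketch of batch-converting a folder of VFR clips to constant frame rate with HandBrakeCLI (Handbrake's command-line front end). The folder names, the 30 fps target, and the quality value are assumptions to adjust for your own footage; it assumes HandBrakeCLI is installed and on your PATH.

    # Batch VFR -> CFR conversion via HandBrakeCLI (a sketch, not gospel).
    import subprocess
    from pathlib import Path

    SRC = Path("vfr_clips")   # hypothetical input folder
    DST = Path("cfr_clips")   # hypothetical output folder
    DST.mkdir(exist_ok=True)

    for clip in SRC.glob("*.mp4"):
        subprocess.run([
            "HandBrakeCLI",
            "-i", str(clip),
            "-o", str(DST / clip.name),
            "-e", "x264",          # H.264 video
            "-q", "18",            # constant quality; lower = better
            "-r", "30", "--cfr",   # force a constant 30 fps
        ], check=True)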

LEGEND, Jul 27, 2017

Yea ... I've tried others also. Handbrake using my own presets just works.

For the person up above, who gets handed material that needs to go right back out, the need to convert to CFR first, especially when the output might be better streamed as VFR, would be problematic.

Wish there were an easy and fast solution.

Neil

Engaged, Jul 03, 2017

Okay...  I just attempted a few trials with some Shadowplay on my Acer, and I ran some stuff in the background to get the frame rate to stutter in game.  I got the same VFR that most gamers are seeing.  There's another problem, though.  The kid living next door to my parents worked with some game devs over at Blizzard for a short time.  He told me that they often dropped frame rates of faster motion by dropping the in-between frames, rendering one step down from the master rate, or simply adjusting by single frame count to account for speed changes so that the rendering engine could keep up.  Even when everything is OK, it's the rendering that's causing the problem.  I thought I'd expand on that, since I mentioned it in my last comment.

I played with the ingest methods in premiere and in AME.  I found a faster way to import the video, without having to completely reformat.  It should just skip over frames that are already rendered, and instead, only fix those frames that need it.  They also have a new interpolation mode called optical flow that does a bit better than the frame blend where the background isn't changing too fast, but you're usually better off with frame blending, as it's the middle ground between basic frame sampling and optical flow, providing blended motion where there's more action in the frame.  I found a fast method.  I used an emulator, so it's old school gaming, but the graphics engine to run the blow up was running too, and that put a load on it.  I set my frames for 24 and bitrate to 40 (this gave exquisite quality and sped up the format further, but the file was huge).  I outputted to an external ssd over usb3, so no speed problems in the transfer (I benchmarked large file transfer to about 350mbps, almost 10x what I needed).

So I started with an h.264 file with VFR at 40mbps, and I ended with that... ...In about 20min for an hour of footage.

Here's how to make the most of an ingest operation --> CREATE A PRESET IN AME for encoding, and another one for INGESTING (you'll see the reason for the second one near the end).

The encoding preset should match the file type you usually output, so create a new encoding mapping, give it a name, and base it on the same file format as your input file.  In the settings, you'll uncheck the box next to frame rate and set it yourself (This activates the time reinterpolation algorithm that will fix your VFR problems only where they exist).  Now, you should set every other checkbox to get it from the source file, and set your bitrate to match (set the maximum to the same as your capture rate, and your target from 1-15mbps lower).  Save your preset.  This will be useful in the next step.

Create an INGEST preset; in its settings, point it to your new encoding preset.  Save it, then export it to your documents folder or your desktop.  In Premiere, you'll pick your video in the media browser, then click the ingest checkbox, and then click the wrench icon next to it to pick the settings.  You can now import a custom copy ingest, transcode ingest or other... ...I suggest transcode; click the add ingest preset button, then pick the settings you just exported.  Once imported, you should be able to use it over and over.  On ingestion, Premiere should use AME to encode the video in the background, allowing you to get started in Premiere; you will still see sync problems, but only until the file is fully replaced.  Once the file is done, you should see a change; if you don't, unlink your file and relink to your new one.  Done.  It took about 15 minutes for an hour of video at 40mbps (a low compression rate really does make a huge file, but allows you to recompress it quickly for use in Premiere).  15 minutes is fast turnaround.  I would consider that to be support for VFR.  But you might not...
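
If you want the same "fix the frame rate, leave everything else close to the source" transcode outside of AME, a rough command-line analogue with ffmpeg might look like this. Treat it as a sketch: the file names are placeholders, and the 24 fps / 40 Mb/s numbers just echo the figures above.

    # Rough ffmpeg analogue of the encoding preset described above:
    # resample timestamps to a constant rate, keep the audio untouched.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "capture_vfr.mp4",  # hypothetical VFR capture
        "-vsync", "cfr", "-r", "24",        # constant 24 fps output
        "-c:v", "libx264", "-b:v", "40M",   # re-encode near the capture bitrate
        "-c:a", "copy",                     # pass the audio stream through
        "capture_cfr.mp4",
    ], check=True)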

I sent some requests to some dev friends, and they got back to me today.

VEGAS, PINNACLE, and several others use a playback engine for rendering in the editor; Premiere uses a FRAME engine (that's how he put it).  The difference is that the playback engine used in VEGAS is based around increments of time, not frames, which can prove disastrous when you upload to a service that's frame based (most are, and YouTube will apply its own fix when you upload; VIMEO users have still been having issues).  Unfortunately, handling VFR like I have proved problematic in PREMIERE and in VEGAS.  In Premiere, however, I could do away with the problem with an INGEST.  In Vegas, I can't do anything.  It will render frame by frame, and line up the sound samples where they belong, marking reframes for the drops, but doing nothing about it.

Another thing about VFR... ...IT VARIES.  Sometimes the rate drops to a usable frame rate, as is the case with most of the gaming drop, but other times it drops by one or two based on the rendering load, which requires different handling.  You can't use only one type of solution for the video; it needs to be analyzed and the proper method run for each individual instance of reframe.

At least now I know what's missing from VEGAS.  I put the file through it, and exported to a new format with different bitrate, etc.  I ended up with the same problem.  VFR.  It was evident when I ran it through Premiere.  The audio slips out of sync when there aren't enough frames to match the rate.  The files still come out broken.  It may work for uploading to YouTube, but it forces you to wait longer for the video to be available.  I've uploaded VFR video to YouTube before, and the original file is deleted in favor of a full re-encode.  I couldn't download it from my GoogleDRIVE until it had finished processing.  Once it was done, I was able to get it.  I've uploaded other videos with CFR that were available almost instantly, both in YouTube and in Drive.

I can't have an unreliable timing effect on the final product of my work.  When I upload a file, I need to know that it works right away.  Input is a different story.  I can easily push an all-nighter to meet a deadline, or set and forget an ingest for a few hours while I go run errands; I have a life outside of work and video editing, so ingesting isn't an inconvenience.  It's more of an automatic pilot thing.  I just do it every time.  And now I can even do that in Premiere and AME... ...So I can choose which one to start with.  With tighter deadlines, I'll go with PRELUDE every time, so I can get a few cuts going, then just relink, because, hey, I may still be learning the intricacies of the new interface, but I actually know what I'm doing with it and why it works that way.

There's been talk that Google may begin rejecting VFR video entirely.  They only held onto it as a carryover from bandwidth-saving playback engines.  Now that rendering with blends and motion estimation is the norm, they're trying to get rid of it.  They may not completely dump it, keeping it for only a few formats that are RENDER ONLY formats for straight-up graphics renders.  You may want to get used to ingesting your video.

Engaged, Jul 23, 2017

I've heard it in a rumor mill or two that some guys are trying to code a frameserver that uses a push style input but generates an MTS based video.  If it's true, and if what was described is actually in the works, the video and audio will either be independently encoded or muxed in such a way that the audio length on clips will be different from the video length.  This was problematic back in the old days of BDMV, as the MTS files had linked audio encoded for stitching, and the descriptor in the folder structure pointed to the stitch points where audio was supposed to mesh together.  Without the stitch point descriptor, which is what you ended up with if you dragged in only your MTS files, your audio would break up.  In this new implementation, from what I heard, you'd get a BDMV-like file, with clips inside that cut at either the file length point or a point where the video changed rate.  If they decide to separate audio, the audio will be in its own file alongside; but if they decide to mux the two, they are intending to use the "Long stretch" style of encoding, making the audio longer at both ends by 1 to X samples, where X is the number of samples per second (so up to 1s longer at each end of each clip).  There are other considerations to think about here...  If they get it working, will there be an extra blank 1s of video at each end, or will the two simply be described separately in the header?  Will the full file actually import as one file, or will it import as a folder of clips?  To support it like BDMV, the stitching of the audio would be necessary to note in the structure somewhere, as part of the wrapper folder like BDMV\AVCHD or as a sidecar file included in the folder structure itself.

It may be just a flight of fantasy, so I won't hold my breath.  I have my doubts, but I thought it was strange that I outlined that function here and then read about it on a dev forum only a few days later.  It might not work, but hey, it might work.  They might do nothing with it, or they might enable VFR for the whole world in a new way.  Again, I have doubts.  Not just because of VFR.

As for supporting VFR video... ...I just ran a buddy's Shadowplay of an FPS through.  It had severe problems with audio sync.  I trashed the sequence.  Most of the frame interpolation has been removed... ...I tried in CS6 on an old machine.  I was able to get the video to work, but it still played back very slowly... ...Old machine, remember?  Adobe used to support an "Interpret Footage" command that actually used the After Effects base engine to rebuild the frames (a watered-down version of the engine, really).  I watched it run with a system tool and checked which files it was actually using.  There's a file in the old CS6 that gives Premiere a basic engine for rebuilding the frames, and it's tied to After Effects.  So I'll run the video through After Effects, and try to interpret the footage there later.  Maybe somebody can figure this one out for adding the feature back.

For the feature request, please have them add the time interpolation back to the INTERPRET FOOTAGE dialogue.  They may have removed it to keep things modular so that they could separate Premiere from After Effects, but it was there before and it does work.

For all of you with VFR problems, you'll have to download the Premiere CS6 version to test this yourself.  There's a dialogue that will ask about enhancement or interpolation; pick frame blending.  It doesn't have Optical flow, so you're stuck with blending as your best option.  Optical flow doesn't work well on games anyway.  Frame sampling works only in rare cases, but not in Premiere itself (it just samples existing frames--go figure).  Try downloading CS6 through the cloud app and work with your file.  I got it to work.  I'll be downloading this just to ensure I can work with VFR.

So here's where we stand:

Method 1.  Use Prelude, AME, or the latest Premiere to INGEST footage -- you can create a preset that keeps your video files looking great, and you get started working even though the initial video shows sync issues (ignore it, edit video only).  I tested this, and after a relink, the edits were still intact at the proper frame for video, but if I edited audio at those points, the edit remained at that point on the audio track.

Method 2.  Use After Effects and interpret the footage to a comp, then use the comp in your sequence in Premiere.  This does a great job when you use frame blends.  Optical flow is less useful with gaming that has a lot of background motion.

Method 3.  Build time into your workflow for interpreting your video by ingesting it.

Method 4.  Go expensive on the input side, and use a separate machine with a frameserver that uses its own processing to correct the frame rate.  This can be done with a cheap PC and an input device.  You'll also need a splitter for your video signal.

Method 5.  Use cheap software that "supports" VFR (they only let you edit it, they don't fix it), and only upload to YouTube; wait the extra time for "Processing," then link to that from other sites you use.

Cheap out on one end of the workflow, expensive on the other.  It's that simple.

Engaged, Jul 29, 2017

I've tried applying the frame blending to my clips in sequences and rendering previews of the areas where I notice the sync falling out.  It works fine.  I have to render areas of about 10s, which usually takes about 10s.  So there is, in fact, support for this, but you have to fix it on the fly, yourself.  Video with several frame rates for longer stretches, however, doesn't work well.  I set up an emulator for an N64 system, set to grab the actual frame rate of the game, and then ran a freeware frameserver that uses push grab to get the video from the app.  It caught the frame rate, as expected, and did nothing to it.  Passing through Premiere, I used the blend method in the sequence... ...only to find that it didn't work correctly.  I had to render all the previews, and even then there were some small jitters.  Gaming is a problematic form, as it's mostly rendered graphics at rates that change due to the render sync of each overlaid object.  However, I set FRAPS, things heated up a bit and the game did lag a bit, but I set the frame rate and got that frame rate.  I then opened a couple of TV sets with my dad's help.  He's an engineer, so I enlisted his opinion.  We traced two chips.  One from an old CRT, and one from a newer LED.  LEDs use a chip with a processing unit on the Composite\Component output that has a memory chip that looks like a buffer unit.  On the CRT, there's a simpler chip.  We then put them back together.  The LED has a high max rate above 200.  The CRT has no listed rate (it just uses the analog signal).  We hooked a PS4 to it, downloaded a frame rate utility to view the rates, and played Call of Duty.  On the CRT, we get a report of 144fps.  On the LED, we get the same.  I opened an old VHS player with a similar chip, made by the same company, Philips, around the same year as the CRT.  I connected the PS4, then passed that into a frameserver on my computer, and got a listing of exactly 24fps.  Basically, the frame rate wasn't changing.  The output signal was heavily blurred.  Over Component, however, the PS4 listed 120 or less the whole time, and I got a signal of 60fps interlaced.  Big difference.  Without the VHS player, another test directly into the computer gives 144fps max, and the rate changes constantly.

I then tried a similar test with an HD Antenna, and did it the same way into the computer.  Over Component, I got 30 and 60 FPS standard with no changes.  Over a cable input, I got the same.

This bears out my earlier explanation: rendering graphics at changing rates due to overlay sync creates long stretches of GOPs with different frame rates, and that breaks support for this.

For those with camera-based VFR, drop your video into a sequence all its own, right-click on the clip in the sequence, tell it to use Frame Blending, scrub through to where the problem becomes visible, then go back 5-7s, and forward from that point by 5s; mark this area and render the previews.  It works for me.  Drop the sequence anywhere; it will play back fine.  I really had to search and test this one.  But it does work for in-camera VFR.

Hey, Mr. NBC, are you sure you're not talking about going from Variable Bit Rate to Constant?  I've run across this too.  Broadcasters have used that to match bandwidth limits.  Signal is passed through a frameserver in the control system that processes each GOP several times, and shoots out the video at a set rate, with the audio.  Multiple passes allow it to keep excellent quality at small bandwidth.  I've got my doubts about whether or not they still use it at NBC, but if you are forced to "fix the problem," and only a few of these come in, you get my meaning.  Most broadcast vans put this through a stream that is handled by a frameserver that reframes the video on the fly.  It's a bit technical, but it works because the missing frames are for extremely short drops, or they are set to keep bandwidth small.  If you get it a lot, talk to your techs about putting in stream-based frameservers; $1k each and you'll have a decent system for it.  They reprocess the data and clean up the framing for you, so you no longer have to.  They will output a live stream feed that you can edit or broadcast with a short delay, long enough to handle reframing on the fly.  Most control consoles are a version of this with a midstream interrupt that lets you edit on the fly.

Explorer, Aug 07, 2017

Yes, Mr. Prelude.  I'm sure that I'm not talking about Variable Bit Rate to Constant.  I've been an editor for about 15 years now.

Engaged, Aug 23, 2017

I knew a video receiver who didn't know the difference when I asked him.  He said "Aren't they the same thing?"  He was 62 and had been working with a few studios over a career spanning more than 30 years.  It's also an easy thing to confuse by the terminology, as I asked a media arts professor some questions about each, and even he stumbled over it.  I meant no insult.

Engaged, Aug 02, 2017

I ran a test with a buddy of mine.  He worked at an affiliate broadcaster for 15 years, and thought it would be a great experiment to try.  He's been doing more 3d drafting renders than video production, but he still maintains a lab and rents it out.  He's built and rebuilt his main render servers several times in the last 2 decades.  We took one, with a quad core xeon, 32gb ram, several 1tb SSDs in RAID with a 10000rpm HDD RAID and a failover backup of both (external casing), along with 4 Nvidia GeForce 4gb graphics cards and the internal graphics on the cpu disabled.  We removed everything but an FFMPEG based frameserver, AVISYNTH frameserver, and virtualdub with several FFMPEG and quicktime plugins.  It also has several usb3 inputs that we used to feed different types of inputs all at once, and had his programmer script everything.  The whole process took about a day to set up, and we all have lives, so it's been sitting for a couple of weeks.  We grabbed our cameras, plugged in 4 of them via hdmi, component, and one over a wifi connection feeding to a folder.  We scripted the inputs into the frameservers accordingly, turned on EIS\OIS on all the cameras, then hit record, shook things a bit, let the recordings go for an hour or so with intermittent shakes and pans.  In the end, the files from the cameras were all VFR, and horribly so.  The frameserver, however, produced files instantly from the streams that were CFR and perfectly in sync.

The frameserver was a custom-built design for speeding up renders of higher end effects, but much the same hardware design that speeds along renders will also speed up a frameserver.  4 simultaneous camera streams, all rendered to usable files, including one from a wifi stream, whose original stream file was also VFR with several dropouts.

We'll be running another test from a portable video drive over a network next time.  I'd wager that the limiting factor in that test will be the network hardware.  We're still deciding on what hardware to utilize for such an experiment.  There are factors to weigh... ...like whether or not we want to set up several small networks ending at the frame server, or just one network with several endpoint uploads.  I'd like to try both, but time may not avail.  I'm an old network hardware tech, so for me, I can see the bottleneck that will slow things drastically.  I believe we can do both and even a multiconnection balanced network to get an idea of the real speed limitations.

As for cost... ...This unit was about $8k for all software and hardware, but it is super fast.  I've rendered hours of video on it in master\proxy setups that render out in short order.  A four hour multicamera shoot in 1080p took about an hour, just under really, to run through to dnxhd hq8bit and an h.264 25mbit\s proxy.

Now... ...From what I understand from the programmer, it's possible to run multiple instances of a frameserver app, one for each input, but you'd need to tie each one to a processing pathway.  That means 1 input to each GFX processor and processor core, with up to 3 or 4 for each ssd, 1 or 2 to a hdd.  With RAID, you can get up to 8 or 9 inputs going to SSDs and 6 on a RAID with HDDs.  You'll need sufficient ram.  For 2 streams, 10-12gb is really the minimum; 3 streams needs 16-18gb.  With more streams, more ram is necessary to handle some memory overhead.  32gb should be good for 6-8 compressed streams at 30p 1080 with no problem, and should even handle 60p 1080 with little lag.  The files produced are instantly usable, and the servers can be scripted to target specific network locations, or locations handed to them by a variable passed in.  That means it shouldn't matter where the original source is on the network, as long as the location is tracked; and the new files produced are instantly usable anywhere.  All that's required for this to work is a network connection that can initiate the stream.  Most IT departments can handle that simple operation.

Get better gear?  No.  I never said that.  I said you need to understand your capture situation and the science of the type of capture before you start complaining about "support" for it.  Gamers and RENDER CAPTURE people need to use equipment or workflows that function based on a Capture Pull rather than a Capture Push function.  Capture Pull can work many different ways, but the fastest is a frame duplication that doubles a preceding frame to replace a dropped one.  Other methods are blending methods.  For video games, a slower optically based algorithm would work.  For cameras, a frame double would be best.  With a frameserver dedicated to the capture operation, you get a clean stream quickly.  This is the equivalent of an external renderer plugged in.  Essentially, an external renderer runs a frameserver that passes your video into your chosen app, and you can immediately get to work.

Get better gear?  Dump old gear?  To each their own with that.  I say, keep your old gear; just augment it with a frameserver that prevents the problem from slowing you down.  Alternatively, turn off the EIS or OIS on your camera.  Older models (6-7 years) used OIS to describe a floating barrel style lens and sensor, while EIS described the framedrop style.  Today, they both mean the same thing, framedrop.  It's become popular because it's easy to program, but not as easy to build physical balances.  With gaming, frameserver Pull types have overhead, typically requiring you to dedicate an external machine to running the frameserver, while Push types will capture only the frames that exist, are easier to build with a single pipe action, and have less resource drain, but produce problematic output.  There's the tradeoff.

Tradeoff:

You can get quality input for any software by focusing on hardware preparation, like turning off EIS, or using a dedicated frameserver.  For a software tradeoff, you get a slight bump in the length of time needed with an INGEST before you process (Prelude\handbrake\AME). 

Targeting the problem on output requires software that allows the editing of the video (typically cheaper, or buy-and-keep rather than a SAAS-style subscription), like Vegas or a Cyberlink editor.  Edit as usual, but at the end there is a menu list box that lets you choose the frame rate; make sure the checkbox for a constant rate is on.  If you leave this set to automatic, the problem will still exist, and many streaming sites won't play it back correctly.

To those who say I'm advocating getting new gear... ...Just a dedicated frameserver that can be employed as a scripted or at-will device, and can capture any stream coming in, producing a deliverable file\stream on the fly.  It's not all that draining on resources; in fact, it strengthens and provides longevity to your current gear.  I'm advocating getting an extra accessory.  I'm also advocating that those who haven't studied rendering graphics to video output, or video output from the frame standpoint (where the problem is introduced), actually take a class and try to understand it a little.  It really does help.

"Streaming is better in VFR"... ...That's a misconception or a misstatement.  Streaming a source for editing might be easier on bandwidth in VFR, but not on playback.  The retiming processing has to happen somewhere in the pathway.  Since a common low quality fix is to drop audio samples, that's what you'd be stuck with in most instances.  Anybody who's ever taken a course in editing will tell you that audio applies emotional context.  For gaming, this might be ok; but for mainstream video, this just shows that you are limited.  You want clients or viewers to be immersed in the context as well as the visual.  After all, you could put up a video of a man kissing a child, but if the audio is wrong, you could turn a loving father into a pedophile.

Engaged, Aug 23, 2017

SO... ...Our tests went a bit crazy, but they worked out.  It took some setup, but we were able to pass the video into various devices, then send the file to a frameserver for linking.  We did have to use an older version of Premiere.  CS4 has an available plugin for avisynth on windows that allowed us to frameserve directly into Premiere!!  We just had to write a script and edit it to add the filename, then add the AVI wrapper and set it to CFR at the proper rate.  It worked.  The video was pulled in and in sync.  I'll link to the tools, but I also want to test it with the latest PPRO to see if it still works.  There is a version of Avisynth that works on mac os, though the plugin is currently only working on windows.  I'll also be asking the developers if they might try to rebuild the plugin for the mac architecture.  It may be the catch-all.  I've still been getting mixed results with interpreting video and applying a frame blend.  In every instance, I've had to set the frame rate, even if it is already detected, and apply a frame blend.  It seems to work best when I simply push the rate to 60fps (rather than a fractional 59.94).  It still fails with some files.  You need at least 12mb\s max bitrate for anything above 720; otherwise it still remains out of sync and the conversion fails to fix the frames.  Also, with longer periods of frame rate drop, it fails.  That only happens with video game captures, and usually with captures done with Shadowplay or similar.
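
For anyone who wants to try the same thing, the AviSynth side of it is tiny. Here is a sketch of a helper that writes the kind of script I described; DirectShowSource with convertfps=true is the classic AviSynth way to flatten VFR to a fixed rate (the file name and the 60 fps target are placeholders, and this is Windows-only like the plugin itself).

    # Write a minimal AviSynth script that flattens VFR to CFR (a sketch).
    # convertfps=true duplicates or drops frames so the stream comes out
    # at the requested constant rate, keeping the audio in sync.
    from pathlib import Path

    script = 'DirectShowSource("capture_vfr.mp4", fps=60, convertfps=true)\n'
    Path("capture_cfr.avs").write_text(script)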

Matrox has some equipment that does "Flagged Frame" capture and integrates with most software.  You can use it to record directly into Premiere.  For external consoles or a separate computer (even a cheap one) you could capture from several sources, but it will send the flagging to Premiere and allow it to create a duplicate or blend frame.  Cheaper input boxes will yield cheaper results that require more steps in prep.  Matrox gear isn't cheap, but if you want speed, they've got the best for the price.  Blackmagic has a few, but they are a bit more expensive and the system requirements are a bit higher generally.  Blackmagic has older interfaces that actually function very well with mac computers (they did more Firewire back in the day, and you can still find some of them around).

As far as interface goes, you are better off with tower based PCI cards, or thunderbolt addons.  USB3 has the speed, but it's a cheaper interface that's used by everybody.  Thunderbolt is used by the more serious manufacturers.  Most of these devices use an EFI based interconnect that allows them to pass the video into the software with flagging and a spacing that allows for injection of a fix frame on the fly.

Generally, there is a huge tradeoff.  Your capture device will produce an input file that matches the programming and manufacturing price level.  It takes a lot more pipelining and threading to duplicate frames on the fly than it does to simply mark them when they are dropped.  In cameras, it's rare to automatically create the frame, and expensive where it is found.  However, there's usually a switch to turn off the drop. 

In capturing rendered video, most usb interfaces are limited in their threading of the capture, so even if you do have the speed, the creation of the new frame to fill a drop is a secondary process that the architecture programming doesn't support natively over a single connection.  Some more expensive interfaces duplicate the connection to allow you to use a second input path, and the software switches between them where necessary.  The resource requirements are higher, and the cost difference is between $100 and $300.  With internal drivers on your graphics render chip, you are limited by the rendering engine.  It's usually a better bet to place the game on one card, and use another card with a more comprehensive capture software (one that will fix the rate) to actually do the capture of your screen.  There is a cost to CPU, but for more regular work that you do for pay, especially work where your deadline is tight, you want to make sure you have a machine customized for it.  Here's the tradeoff in a nutshell:

More expensive setup = fewer hangups, faster conversions, proper content creation, less time needed.

Less expensive setup = more hangups in between, slower conversions where needed, buggy content creation, a lot more time needed in both workaround creation and in prepping your files.

Note that when I say setup, I'm talking about the physical gear.  Software should generally match your hardware.  I wouldn't normally use Premiere for gaming capture cutting, but that's because I know what I'm working with, and I've prepared for several different workflows.  Most of my game capture doesn't require that much power, even if it is a great GUI.  I can use cyberlink for that, and fix the VFR at the output stage.  It takes a bit longer to export, but again, I've worked with it long enough to edit quickly.  I hate the sound it has in the gui, but I also check my sound outside of cyberlink.  I know what I'm getting.  I call it a lazy workflow, because there's less cutting, and it's a bit of a lazy setup.

I've done more than a few events, with multiple cameras, angles, clips, and audio sources.  I always start with either AME to recode, or Prelude to preview them and set up a recode.  By the time I even get to Premiere, a lot of my clip work is done.  It's a fuller workflow, but it renders consistent results every time.

That pal I've been using for some of these tests has a matrox input box with hdmi that feeds 1080p with flagged frame input right into premiere, so he just plays cameras back into the software, uses a control board to switch if there are several, and even does some live work with it, which removes the problem completely, while capturing the video to a high quality format, even if the cameras don't record high quality internally.

Your setup and prep will determine much of your workflow.

Engaged, Aug 25, 2017

OK... I've been on a mission to try and find out what the deal is with VFR from game capture and from other places.  I've found one more workaround, but more on that at the end.

Elgato, AMD and Nvidia all got back to me with links to their pages that all say pretty much the same thing.  Elgato gave me several links that were for different equipment, the others only to the latest and greatest, but I swam around in the surf for a while (on their internet pages) and found similar info for older equipment.

ALL of these external boxes say the same thing.  To capture to CFR, your system should be well above the minimum requirement, and be running only the capture.  All of their internal cards say that you should have at least ONE card for capture, but for full frame rate and full hd, you need to set low bitrate, and have TWO or more cards installed in SLI, with the capture running on the second card.

However, I was able to get a gaming laptop running a 1080p game at full 30fps, in CFR, with a usb2 capture set routed through an FFMPEG based capture on a $50 netbook with Atom 1.2ghz, 2gbram, 256mb video card and winxp 32bit.
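
For the curious, that netbook chain boiled down to something like the screen grab sketched below: asking the grabber for a fixed input rate is what keeps the recording CFR. A sketch only; gdigrab is ffmpeg's Windows screen-capture input, and the 30 fps rate and encoder settings are assumptions.

    # Screen capture at a fixed input rate with ffmpeg (Windows gdigrab).
    # Requesting -framerate on the INPUT side is what keeps the file CFR.
    import subprocess

    subprocess.run([
        "ffmpeg",
        "-f", "gdigrab", "-framerate", "30", "-i", "desktop",  # grab the screen at 30 fps
        "-c:v", "libx264", "-preset", "ultrafast",             # light enough for a weak CPU
        "game_capture_cfr.mp4",
    ], check=True)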

I've also started a dialog with some guys that developed a plugin for routing an AVISYNTH lead into premiere CS4, but I haven't heard much.  How would this help?  Simple.  It would allow you to feed avisynth a file with VFR, and it would feed a new live file (constantly updating and a bit annoying) or the AVS script which would process the frames in avisynth directly into premiere with the format and data you specify in the AVS script.  It bypasses reading of VFR by reading it from a frameserver and feeding it back through the premiere renderer as if it's a file.  However, it would require a huge pile of RAM to store the frames while you work (only downside).  This gave me an idea, that seems to work very consistently.

Since PPro and AE both feed each other in the following way: APP1renderer>APP1frameserve>APP2renderer

They work pretty much the same as that plugin.  I tried putting some VFR in AE.  Same sync issues... ...darn it... ...wait...  I interpreted footage in AE and set it to blend.  I did it two ways, first without setting the rate, then by setting the rate specifically.  The second method worked a charm.  By setting the frame rate of the interpreted footage, it fixed the sync in AE.  I then brought that into Premiere, and it ran fine from a comp.
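
If you don't have AE handy, a rough command-line parallel to that "set the rate, then blend" fix is ffmpeg's minterpolate filter, which can conform VFR material to a fixed rate with blended in-between frames. A sketch, assuming ffmpeg is installed; the 60 fps target and file names just mirror the numbers above.

    # Command-line parallel to "interpret at a set rate, then frame blend".
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "shadowplay_vfr.mp4",         # hypothetical source clip
        "-vf", "minterpolate=fps=60:mi_mode=blend",   # constant 60 fps, blended frames
        "-c:a", "copy",
        "shadowplay_60cfr.mp4",
    ], check=True)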

SO... ...To recap... ...Adobe does support VFR processing, but you have to set the interpret function for the frame rate in order to kick in the fix.  It works consistently in AE and wrapping it through a comp into premiere isn't a one step drop, but it's as close as I can get you.  If it doesn't work for you, get me some particulars and I'll see what I can find that might make it work different:

OS

type of video (Render or camera)

type of camera \video card

filetype

file bitrate

frame rate

Engaged, Sep 05, 2017

Ok... ...So a few more tests are in.

After testing several software titles along with Premiere, the difference in how they function is clearly accuracy of placement.  With other titles, effects placed in the area where the frame rate changed rapidly were unpredictable as to how long they'd actually go, or where they would start and end; this was less noticeable when setting the frame rate constant on the output, but it was still there, pushing the effects either forward or back by a few frames.  For most of my own projects (camera based), this is unacceptable.  For projects dubbed from a renderer, where I'm trying to affect only the audio or tie in a pic-in-pic video or the like, it isn't so bad, and the results are acceptable for the subject matter.  With Premiere, the effects go right where I place them, whether I have to retime or not (due to VFR).  What I see in the editing is what I get on output.  For camera work, this is just right.  For rendering work, this only comes into play when I have a transition between scenes, and while it feels better, it looks similar, and there isn't much difference other than the feel of the transition.

The difference is in the accuracy.

When it comes to VFR, the only way it has ever worked with any usability is the old VOB packet style, where each time the shot changed frame rate, there was a new VOB with a setting.  This was used with DVD in some cases, but not many, as it wasn't compatible with all players, and had a tendency to skip or bounce over a VOB that looked broken to the player.  Later players were able to handle it.

Now for one more possibility...

I've seen a few guys use CS5 and 6 on live files (files coming from cameras or sources that are CFR or have an input actually building the file they are using while they are editing a sequence with that file in it).  The only downside they had was that the screen had to refresh with the file refresh, which interrupted their work every few minutes, but it was a setting they could alter.  So... ...That's essentially frameserving in a nutshell...  Feed frames from a source to a reader that processes and\or plays them.  I also remember a plugin that allowed MSWindows users to use AVISYNTH scripts to feed data into Premiere CS4.  I tried to get it to work, but it won't.  It needs the full 32bit architecture and structure of CS4.  Bummer.  However, that doesn't stop you from setting up AVISYNTH to pour to a new AVI file, and then use that file in your Premiere app.  Look for a setting that says something similar to file refresh or refresh rate in the prefs and set it to a fitting value.  If your clips are less than 5min, set it to 10 and get an avisynth-based queue app that allows you to rip all the clips out that way using alternating threads.  Let it run a few seconds, then import the new AVI files back into your app while they're being processed.  We tried one.  In our tests the file was locked somehow on win10 (we got one of them to work, and we're on it), but in the last tests on win7, it worked.  There was a full window refresh when the file was updated and the refresh kicked in.  Save every few minutes and you're golden.  It did crash at one point, but the autosave and our own saves kept almost everything intact even though the file length changed.  This also handled the VFR problem by doubling the frames where needed.  It looked ok for the most part, though I prefer blends, as they are simply a double when between two alike frames and they are a transition when there's sudden motion.  Optical flow doesn't do much with renderings because the background is always changing as you turn your character in a game.

So the best solution is still to ingest video for use with premiere, or use an app that will handle the editing with a seconds accurate playback, but you can get these into premiere and get them to work.

On another note, some pages show that the capture mechanisms built by gfx card manufacturers weren't designed only for playback to show off, but to give developers a way to make adjustments to get the motion to look right (i.e., they needed actual frame rates for figuring out how much stress is on the system and what requirements there are to play the game at different graphics clarity levels).  The hardware devs are also giving me the "It says it clearly on our website" line and pointing to a page that says roughly:

"In order to lock to a constant frame rate, you must max out your system to at least the recommended requirements for us of only the capture device.  If you have a system intensive game, you'll need even greater resources."

Some even said you'd need the maxed out recommendations with up to 0.75 of your game's recommendation added to that for proper function (not on their own page, but suggested in their forums).

If you are getting a lot of files from outside sources, the only answer for you really is to ingest each video with a standard preset that fixes everything you need it to, from color palette to frame rate.  But that will take a long time with a single preset.  All I can say is that I have a handful of them for different uses.  Some are for Apple device-supplied video, others for JVC and similar, and still others for GoPro or similar (some noise reduction is necessary for cheaper actioncams).  They all output to H.264 and AVI on win, and h.264 and ProRes on mac.  I can use them in Prelude, or AME.  They work fine.  I also have ingest presets for AME, Prelude and Premiere, and I have counterparts set up for Handbrake.  Say I want to process two sets of videos at once, I just use both Handbrake and AME, then leave that unit alone to work.  I do have access to other computers or devices usually, so more of my work can get done all at once.

Engaged, Oct 08, 2017

Okay... ...sorry to leave this alone so long.  I took on 6 classes this semester, plus 3 other long term projects.  I haven't forgotten this thread, or my mission to try and offer at least some piece of helpful advice.  I did help make a great purchase though, that provided some new information and some ideas.

Since I started checking this thread, I've been in contact with several manufacturers of industry standard equipment.  Their frontline reps had no info (can you say India?  I keep flashing back to Pulp Fiction and Samuel Jackson's "ENGLISH *************!!!!  DO YOU SPEAK IT?!!!").  Most did have an upper tier usually dedicated to higher-end clientele, but when I told them why I was trying to get information, with the possibility of sending some business their way, they were very kind.

Upper tiers understood the question, and even pointed out their products and pages for how to get it to work.  Blackmagic and Atomos are the best, with Radeon and Elgato (which has a new name, check it out) just barely behind.  All require a fairly maxed out machine.  Cheaper "Asian" built and sold models had pages that suggest you also "download handbrake to ensure video compatibility".  Many larger names like RADEON and NVIDIA also had pages stating that their ON CARD mechanisms require you to re-encode your video after capture, and request you download Handbrake if you upload to any site other than youtube.

I pooled with some buddies, and we custom built a tower that could probably beat commander DATA (of Star Trek: TNG) at chess... ...Okay maybe not...  But at 26cores of intel, 64gb ram, 3gfx cards at 6gb each, and a main system raid of ssd's to 2tb, we were able to kick serious hindquarters...

We ran some COD4 from STEAM, and FRAPS at the same time, grabbing a 4k clip.  Everything ran fine, perfect speed, no effect on our gaming from running FRAPS.  We ran Shadowplay... ...it did slow the game, and the capture was VFR at every resolution, frame rate, and bitrate.  The difference is definitely in capture style, but we also built this thing to go BEYOND GAMING speeds.  Gaming speeds are short, small bursts at high speed, while video capture is typically larger, longer bursts at high speed.  You need to be set for the latter, which will usually encompass the former.  Of course, gaming machines maximize on how many bursts can happen simultaneously, but you can design for that along with designing for larger, longer bursts for video performance.

Better hardware?  Yes and no.  More like hardware built to work in tandem with your workflow and software.  All three have to work in tandem.  Make a plan, test a software or two, and see what fits together the best.  Then buy.  Most of the complaints here... ...you bought before you understood what you were working with... ...we've all done it... ...once for most... ...twice or more for others...  For game capture with FRAPS, here's what I'd suggest for a system:

A tower

12 cores

12-32gbram

at least one video card 5gb+ (2 is better in SLI \ XFIRE)

SSD raid for system (and apps)

That should get you to 2k at 60fps for recording, but you'll have to record to a usb3 output or a high-speed internal RAID separate from your system drive to prevent bottlenecking.

"better" hardware?  No.  Just hardware that matches the function you're trying to perform so you don't have to fight your software.

Alternatively, get an external box to record with and use a splitter on your video output to pass the video to both your recorder and your screen.  This will also allow you to put audio sources together in a small, cheap mixer for inputting into the stream, so you can apply a mix of your pc audio on one track and your voice on another, or mix the two together via Right and Left stereo.  There are some cheaper boxes that will output the video data to a card or an ssd you put in the box; these are the better way to go, as they don't use your system resources to record the video, and are designed to capture by polling the buffer for changes, creating CFR video, but be careful how cheap you go.  Some record VFR by default, but are mostly cheaper "Asian knockoffs" of more expensive models from known names.  Generics are always a gamble.  You typically get what you pay for.

I'll make some presets and upload them.  It'll take a while as midterms are coming up.

Engaged, Nov 08, 2017

I'll be uploading some VFR to an adobe account for a programmer to play with.  They're trying to handle it.  How bout some props for them, eh?  Give a shout out.

LEGEND, Nov 09, 2017

Great news, Harley! Go for it ...

Neil

New Here, Jan 22, 2018

Looks like the "fixed the glitch"! Havent tested it yet, but sounds promising!

New features summary for the January 2018 and October 2017 releases of Adobe Premiere Pro CC.

Adobe Employee, Jan 22, 2018

Hi rmshro0,

You can now work with variable frame rate video in Premiere Pro CC 2018 (12.0.1). Feel free to download that version from Creative Cloud. More info here: New features summary for the January 2018 and October 2017 releases of Adobe Premiere Pro CC.

I apologize that it took so long.

Note: if the video streams are too long, or the frame rate varies too widely, you may have to convert them in either Shutter Encoder or Handbrake.

Need more info? See this article: FAQ: How to work with Variable Frame Rate (VFR) media in Premiere Pro?

Want Premiere Pro to work better with VFR media? Make a feature request! Fix VFR Performance.

Thanks,
Kevin

New Here, Jan 23, 2018

It barely works.  Some clips it won't recognize as VBR, and clips that are long still slip.  Oh well, at least they are heading in the right direction...

Community Expert, Jan 23, 2018

schoeps wrote:

It barely works.  Some clips it won't recognize as VBR, and clips that are long still slip.  Oh well, at least they are heading in the right direction...

VBR or VFR?

New Here, Jan 23, 2018

Sorry, VFR

Adobe Employee, Jan 23, 2018

Schoeps,

Sorry, VFR

Now that you have VFR support, you can transcode and create proxies for clips. Please try that to see if you can salvage these clips.

Please also file any bugs here.

Thanks,
Kevin

Community Expert, Jan 23, 2018

Please report what types of clips do not work (source, format, specs).  I'd like screenshots of MediaInfo, tree view.

They collected samples from users as part of their work on this, so I'm sure they are interested in what is, and is not, working.

New Here, Feb 01, 2018

I am using the Premiere Pro CC 2018 (12.0.1) release, but when editing a Bandicam recording (VFR) it is seriously out of sync.  Trying to manually sync it doesn't work because it is not a consistent "out-of-sync-ness".  I was told that changing the settings in Bandicam to CFR should fix this issue.

Sorry, not sure how to find all that media info tree.

Community Expert, Feb 01, 2018

Media Info: http://mediainfo.sourceforge.net/en/Download/

PC only?

One of the view options is "tree."
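
If you'd rather check for VFR from a script than from the GUI, the MediaInfo command-line tool reports the same "Frame rate mode" field you'd see in tree view. A small sketch, assuming the mediainfo CLI is installed; the file name is a placeholder.

    # Check whether a clip is VFR with the MediaInfo CLI (a sketch).
    # The full (-f) output includes a "Frame rate mode" line that reads
    # Variable/VFR for variable frame rate clips.
    import subprocess

    out = subprocess.run(
        ["mediainfo", "-f", "bandicam_recording.mp4"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "Frame rate mode" in line:
            print(line.strip())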
