Error Exporting Video

Explorer, Sep 08, 2024
Hello Everyone,

I was wondering if anyone had advice on an issue I'm having while exporting a project. I'm working on a vlog that's 23 minutes long, and I've tried exporting it eight times now; it has failed every time. The first four attempts produced mp4 files, but strangely, my media players couldn't open them, showing an error that I may not have the correct codecs installed. That can't be the case, though: when I export just the first few seconds of the video as an mp4, it plays just fine, so there's a specific point in the timeline that keeps triggering the crash. The problem is that I can't think of any way to determine which point in the video is causing it. Because the files can't be read and have no duration, I can't see where the export is consistently stopping. I've always had issues exporting literally anything in Premiere; I don't know if I'm doing something wrong, if it's my hardware, or if it's just Premiere being Premiere, but I would greatly appreciate advice not just on solving this problem, but on best practice for exporting in general, as I clearly struggle with it.

 

Thank you in advance!

TOPICS: Crash, Error or problem, Export

Community guidelines
Be kind and respectful, give credit to the original source of content, and search for duplicates before posting.
Community Expert, Sep 08, 2024
I'd look for obvious errors or glitches in the video. You can also try running the original video through the app HandBrake to see if that fixes the problem.

LEGEND, Sep 08, 2024
What's the original media, and created by ... what? Give all the details. A screengrab of the data in a MediaInfo "Tree View," dragged and dropped onto your reply box so we see the pic directly, is about the easiest way to do that.

Explorer, Sep 09, 2024
I'm not sure if I'm understanding/doing this correctly, so please let me know if you need any more information! I've attached a screenshot of what appears in MediaInfo when I load my most recent attempted export, and I've also attached a text file of the MediaInfo display for a clip of the original media. Thanks so much!

LEGEND, Sep 09, 2024
Ok thanks ... so that's XAVC, a long-GOP format, thankfully CFR (constant frame rate), 1920x1080, 23.97fps, 422/10-bit. Created by a Sony camera? It should play ok.

 

My one issue is that it is "full range" ... which Sony allows camera users to select for some stupid marketing reason, and which should not EVER be done. Rec.709 8/10-bit media is encoded as YUV (technically Y/Cb/Cr) ... and "limited range," a holdover from the videotape days of broadcast, where the 0-16 and 235-255 data ranges were reserved for 'housekeeping' data.

 

It sounds like you'd get more data by encoding to full range, but you don't. It's only a difference in how the same data is written to the file. And it messes up use of the file in most NLEs, because YUV media is expected to be ... and is correctly handled when it is ... limited range. Only 12-bit RGB files should be full range.

 

Lumetri presets for full-to-limited conversion do ship with PrPro, and I would recommend applying one of those to clips like this in the bins, so it becomes a Source effect. It corrects the range of your images before they hit the timeline, so your color corrections will likely be more accurate.
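The full-to-limited conversion itself is simple arithmetic: 8-bit limited-range luma occupies 16-235 instead of 0-255. The Lumetri preset does this properly per channel and bit depth; this sketch just shows the underlying remap:

```python
def full_to_limited(y_full):
    """Remap an 8-bit full-range luma value (0-255) into the
    limited/broadcast range (16-235) used by Rec.709 YUV media."""
    return round(16 + (y_full / 255) * (235 - 16))

print(full_to_limited(0))    # -> 16  (full-range black becomes limited black)
print(full_to_limited(255))  # -> 235 (full-range white becomes limited white)
print(full_to_limited(128))  # -> 126
```

Tagging a file with the wrong range flag means the player or NLE applies (or skips) exactly this remap at the wrong time, which is why mis-flagged full-range media looks washed out or crushed.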

 

That probably is not the issue with the export, though. I'm wondering if your computer is struggling with the long-GOP decoding/encoding issues.

 

Can you do a test? Open Media Encoder, select a couple of these clips, and transcode them to, say, ProRes 422. Then import those into Premiere, drop them on a timeline, and do an export.
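If you'd rather run the same test outside Media Encoder, the free ffmpeg tool can do an equivalent transcode (assuming you have ffmpeg installed; Media Encoder works just as well). A small helper that builds the command might look like:

```python
# Sketch: build an ffmpeg command to transcode a long-GOP mp4 clip to
# intraframe ProRes 422, mirroring the Media Encoder test suggested above.
# Assumes ffmpeg is installed and on your PATH.
import subprocess

def prores_transcode_cmd(src, dst):
    """Return the ffmpeg argument list for a ProRes 422 transcode."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks",   # ffmpeg's ProRes encoder
        "-profile:v", "2",     # profile 2 = standard ProRes 422
        "-c:a", "pcm_s16le",   # uncompressed audio, standard in .mov
        dst,
    ]

# Example (uncomment to actually run the transcode):
# subprocess.run(prores_transcode_cmd("clip.mp4", "clip_prores.mov"), check=True)
```

If the ProRes versions of the clips export cleanly from Premiere, that points at long-GOP decoding as the culprit rather than the sequence itself.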

 

Also, what export presets have you used, into what format/codecs?

Explorer, Sep 11, 2024
Hi Neil, 

Thanks so much for the response! I'm going to be working on the project a bit later today after classes, so I'll have more to report then! In the meantime, I was wondering if you could explain exactly what a long-GOP format is, and why a computer would struggle with its encoding/decoding process. I'll make sure to do the test at some point today, thank you so much! And yes, all of this was shot on the Sony ZV-E1 in S-Log3 as mp4 1080p video.

LEGEND, Sep 11, 2024

WHAT IS LONG-GOP VIDEO?

 

This is a very specialized encoding process in the capture device, used by some cameras, most drones, nearly all phones, and many screen-capture applications like OBS. In phones, drones, and cameras, specialized chips typically do the encoding of the video data that's written to the file.

 

To save space, a two-part process is used.

 

1) Basic Compression

The first part is "simple" compression within a frame, typically using block data reductions: a group of pixels ... four, six, eight, nine, whatever number is chosen ... is compared. If they are all pretty close in data values, say within a point or two of a "central" number for that group (for example, values close to R 24, G 15, B 42), the device encodes all of those pixels as that single central value.

 

Most of the time, you won't notice the change, as it's pretty small-ish and confined to small sections. BUT ... if it is in a sky or a wall, without other texture, you may well notice a visible pattern, especially when expanding contrast, increasing saturation, or 'lifting' an area of the image to a brighter view.

 

At that point, you can start seeing 'banding' ... visual lines of brighter/darker bits across the sky or a wall, or 'macro-blocking', where you see small, odd shaped blocks of identical values.
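The block-reduction step above can be sketched in a few lines. The block size and the "closeness" threshold here are illustrative choices, not any camera's real numbers:

```python
# Toy version of block-based "basic compression": if every pixel in a small
# group is close enough to the group's average, store one shared value.

def compress_block(pixels, threshold=2):
    """Collapse a block of near-identical pixel values to a single central
    value; leave the block alone if there's too much variation."""
    avg = round(sum(pixels) / len(pixels))
    if all(abs(p - avg) <= threshold for p in pixels):
        return [avg] * len(pixels)   # whole block stored as one value
    return pixels                    # textured block: keep the originals

# A flat patch of sky collapses to one value...
print(compress_block([24, 25, 23, 24]))   # -> [24, 24, 24, 24]
# ...but a textured patch is left alone.
print(compress_block([24, 60, 23, 24]))   # -> [24, 60, 23, 24]
```

The banding and macro-blocking described above appear exactly when many adjacent blocks in a smooth gradient each collapse to slightly different central values.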

 

Why do they use this type of compression?

 

Because in the next step, the creation of the 'long-GOP' structure, the above process reduces the amount of data that needs to be stored. And it's a truly massive data reduction.

 

2) The long-GOP format structure

 

"GOP" is "group of pictures." The chip in the device selects groups of images that seem to have the same basic data in the same places, and it creates a few actual frames to record to file, called iframes. These will be from 9 to more than a hundred "frames of video" apart, and are compressed only as in step 1 above.

 

It's what happens in between those iframes that both gives this format its incredible data-compression rates and ... causes such problems in NLEs ... non-linear editor programs like PrPro, Resolve, and Avid.

 

Data-sets versus actual image frames

 

You can only compress an image so far before you start losing noticeable detail. BUT ... if you simply throw away all 'repeated' data in a video frame, you can save massive space. And I do mean literally throw away data.

 

Take a group of pictures ... maybe twenty frames ... the camera and scene are pretty stationary, there's not much movement, and that movement that does happen is predictable. Great! This is an ideal candidate.

 

The chip analyzes the first image frame against the following group and looks for the pixels that pretty much stay the same. In all the following images, those are simply thrown away. Yes ... they no longer exist in the written file! Now, all that's needed for that "frame" is a small data-set!

 

But it does record three types of data-sets, that list what does change ... in three ways.

 

1) pixels that have changed since the last iframe;

2) pixels that will change before the next iframe, and

3) both.

 

So you have a file that includes an actual, highly compressed image frame only every 9-100 'frames' of the clip. In between are only charts of data of the three types listed above ... no actual images. So your long-GOP video file has only a few real images in it. Very few, in fact. It's mostly simple data-sets.

 

And yes, that is the entire reason for the file size differences between a lightly compressed ProRes and your typical H.264/5 "long-GOP" file of the same scene. A long-GOP codec has very few frames stored to file. But in ProRes, or any other "intraframe" codec, every frame is a complete, if compressed, image. All frames of the video are complete in and of themselves.
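The storage arithmetic behind that difference is easy to sketch. With illustrative numbers (a 30-frame GOP, and data-sets assumed to be a tenth the size of a full compressed frame, which is a toy figure, not a measurement):

```python
def stored_fraction(gop_size, dataset_ratio=0.1):
    """Rough fraction of intraframe storage a long-GOP file needs:
    one full iframe per GOP, plus small data-sets for every other
    'frame'. dataset_ratio (data-set size as a fraction of a full
    frame) is an illustrative guess, not a measured figure."""
    iframes = 1
    datasets = gop_size - 1
    return (iframes + datasets * dataset_ratio) / gop_size

# Under these toy assumptions, a 30-frame GOP stores roughly 13% of
# what an all-iframe (intraframe) codec like ProRes would need.
print(round(stored_fraction(30), 2))   # -> 0.13
```

That ratio is the whole appeal of long-GOP in a camera, and the whole cost of it in an NLE, where every one of those missing frames has to be rebuilt.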

 

And the chips created to do this long-GOP encoding in cameras and other devices are fast at performing those computations, and then at writing the small amount of resulting data to the card or disc.

 

Playing back long-GOP video files

 

The 'normal' playback of such video files does take some computing horsepower, though for "basic" playback of one file, not an incredible amount. What's the process?

 

The computer starts by decompressing the first iframe ... then looks for the p- and b-frames, the data-sets of information between that iframe and the next. It checks whether it has data-sets referencing the next iframe in that group, and if so, decompresses that next iframe also.

 

Both of those iframes are stored to RAM or cache, and now ... it starts going through the data-sets of changed pixels, recreating the frames by comparing the iframes against the data-sets, and storing each frame to RAM/cache as it is recreated.

 

Now ... it has some frames of video, and can start playback. But that computation process has to continue all the way through playing back that one file.
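The decode loop just described can be shown as a toy: start from a decoded iframe, then rebuild each in-between frame by applying its data-set of changed pixels to the previous frame. (Real p/b-frames use motion vectors and reference frames in both directions; this sketch only shows the "apply the recorded changes" idea.)

```python
def rebuild_gop(iframe, datasets):
    """Toy long-GOP decode: iframe is a full frame (a list of pixel
    values); each data-set is a dict of {pixel_index: new_value},
    i.e., only the pixels that changed. Every other pixel is simply
    carried over from the previously rebuilt frame."""
    frames = [list(iframe)]
    for changes in datasets:
        frame = list(frames[-1])      # start from the last rebuilt frame
        for idx, value in changes.items():
            frame[idx] = value        # apply only the recorded changes
        frames.append(frame)
    return frames

gop = rebuild_gop([10, 10, 10, 10],
                  [{1: 99}, {}, {3: 50}])
print(gop[-1])   # -> [10, 99, 10, 50]
```

Note that every rebuilt frame depends on the one before it, which is why the player must work through the whole chain from the iframe before it can show you any frame in the middle of a GOP.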

 

What about an NLE versus a "simple" video player?

 

It is often noted that video players like VLC don't struggle with playing back a long-GOP file. So why do Premiere, Resolve, or Avid, far more capable programs, struggle at times with long-GOP media?

 

It's very simple, and in two parts. First, video players are themselves very simple applications, with very low demands on computer resources.

 

But second ... they are only playing one video file. An NLE playing back a sequence is grabbing bits and pieces from a whole bunch of files of perhaps different formats/codecs, frame sizes, and frame rates, adding effects, and playing the result on your screen ... now a very heavy computational load. So in basic operational use, an NLE requires a ton more computer power.

 

A real look at long-GOP playback in an NLE

 

Ok ... start with the bit above about the computer having to re-create the vast majority of the 'frames' you see on screen by comparing scattered but 'real' frames against the intervening data-sets. Remember, those in-between images are being fabricated from data-sets and stored somewhere for access ... on the fly, and in real time!!!! Now, let's further complicate things.

 

Let's grab a few, selected frames, from this clip, throw some sizing and scaling and color/tonal changes on them, then grab a few frames from this other clip, but only do some color changes, and go on down the sequence this way.

 

But remember ... the computer has to use RAM, cache, or other resources to store the frames it has to continually fabricate (decode) from data-sets and comparative analysis, besides doing all the work of the NLE: grabbing bits, smooshing them together, and adding effects.

 

Yea, that's a load.

 

And some devices now use partial iframes! In other words, they keep part of a video image and fill in the rest of that partial iframe via data-sets linked to other iframes!

 

Which means that for some drones and cameras, the computer has to fetch a hundred-plus frames' worth of file data, comb through it for iframes, partial iframes, and data-sets, and fabricate that entire group of images before the first frame can be played back.

 

It's almost insane what we expect computers to do these days, and that they do it at all is amazing.

 

But ... if your computer has the right hardware bits to handle the decoding of long-GOP files as a 'hardware process', rather than doing the fabrication work by "manual calculation", it can be ... not too bad.

 

Until you start throwing too many other effects on top of the decoding work.
