Tone mapping HDR PQ (Canon)?

Community Beginner, Jan 01, 2024

Hi, 
Premiere explains that it automatically tone-maps HDR HLG--by just clicking that you want to use the Rec.709 space when you put it in the timeline--but it doesn't mention HDR PQ. Does it recognize that as easily? Any suggestions otherwise (besides complaints about Canon 😉)?

TOPICS
Formats

LEGEND, Jan 01, 2024


There's a host of settings to get things working correctly.

 

In the Color Workspace, Lumetri panel, Settings tab ... set auto-detect log to on. In the Sequence tab, auto-tonemapping to on, and select a Rec.709 sequence.

 

Set the Display gamma to your desired setting; for most non-Mac setups that will be broadcast 2.4, or maybe web 2.2 if you prefer.

 

Now drop a clip on the sequence, and see what it does. Post back. I've tested about everything out there from Red/Arri through Fuji, but not much Canon. Would like to see your results.

Community Beginner, Jan 12, 2024


Thanks, I'll give it a try soon. I've also been building/tweaking LUTs on LUTCalc for this purpose, so I'm keen to see how they compare to the auto-tonemapping.

LEGEND, Jan 12, 2024


You probably know this, but many coming to this forum won't. And it's an incredibly important thing when using a LUT-based workflow.

 

IF ... you know what you're doing, why, and most especially, how ... and you know how to properly test a LUT for when and what it breaks (which it will do to something at some point) ... so you know the exact use and limits of that LUT ... then making LUTs can be a very useful and (mostly) safe tool.

 

They can be used in all kinds of things. But ... even a LUTCalc-created LUT is still, all in all, "dumb math". Simple "X becomes Y" lookup tables. Even the 3D LUTs ... those just have different sets for different channels. There's no if-then/not-this type of calculation in a LUT.
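To make that "dumb math" point concrete, here's a toy sketch (not Premiere's internals, just the general idea): a 1D LUT is a fixed table of samples, and applying it is nothing more than a lookup with interpolation. The same input always produces the same output, no matter what clip it came from.

```python
def apply_1d_lut(value, table):
    """Look up a 0..1 value in a 1D LUT table with linear interpolation."""
    n = len(table) - 1
    x = min(max(value, 0.0), 1.0) * n   # clamp, then find position in the table
    i = int(x)
    if i >= n:
        return table[n]
    frac = x - i
    return table[i] * (1.0 - frac) + table[i + 1] * frac

# A tiny 5-point "gamma-ish" table; a real LUT has hundreds or thousands
# of entries, but the mechanism is identical.
lut = [0.0, 0.35, 0.58, 0.80, 1.0]
print(apply_1d_lut(0.5, lut))  # always 0.58, regardless of the clip's content
```

A 3D LUT works the same way, just indexed by all three channels at once. There is no branching anywhere in the process.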

 

However, the crucial difference in tonemapping is that the algorithmic math process used specifically allows for if-then/not-this-but-that sections. As many as are wanted, within the complex mathematical work the algorithm can do. Entire libraries of if-then sections can be included, and even added later, to the process.
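As a toy illustration of that branching (a hypothetical highlight roll-off, not Adobe's actual algorithm): an algorithm can leave mid-tones untouched and only compress values above a knee, so nothing ever clips past 1.0.

```python
def tonemap(value, knee=0.8, peak=4.0):
    """Map scene values in 0..peak into 0..1 with a soft highlight knee."""
    if value <= knee:            # if-then: mid-tones pass through unchanged
        return value
    # else: compress everything above the knee into the remaining headroom
    excess = (value - knee) / (peak - knee)
    return knee + (1.0 - knee) * (excess / (1.0 + excess))

print(tonemap(0.5))   # below the knee: unchanged
print(tonemap(3.0))   # far above 1.0, yet compressed to stay below 1.0
```

A LUT can only bake one fixed curve; an algorithm can pick its behavior per pixel, per clip, or per detected format.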

 

So in general, the algorithms used in tonemapping are far safer than LUTs. For the vast majority of media run through them, they simply cannot "break" the image.

 

For example, they will not push anything to a clipped or crushed out-of-bounds, unrecoverable point. Which is always a consideration and concern with any LUT built by anyone for any use.

 

Algorithms can easily take into account varying field exposures without problems ... which LUTs simply cannot. You always need to be able to 'trim' your field exposures into and through any log-to-linear "normalization" LUT, to modify the clip to match what the LUT was built for. That is the standard colorist training and working practice for LUT-based normalization workflows.
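To sketch what that 'trim' looks like (an idealized example, assuming a pure-log encoding where one stop of exposure is roughly a constant code-value offset; real camera curves differ): the exposure adjustment happens before the fixed LUT, because the LUT itself can't adapt.

```python
STOP_OFFSET = 0.08  # hypothetical code-value shift per stop for this toy log curve

def trim_then_lut(log_value, stops, lut):
    """Shift the log code value by an exposure trim, THEN run the fixed LUT."""
    trimmed = log_value + stops * STOP_OFFSET
    return lut(min(max(trimmed, 0.0), 1.0))

# Stand-in "log to linear" normalization as a function; a real one is a table.
normalize = lambda v: v ** 2.2

# A clip shot one stop under gets trimmed up before normalization:
print(trim_then_lut(0.42, +1.0, normalize))
```

The order matters: trim into the LUT, not after it, so the clip hits the curve at the exposure the LUT was built for.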

 

Which you do not need to do with algorithmic tonemapping.

 

Which is why most colorists prefer algorithms for OETF and IDT (input transform processes) use, on inputs in, say, Resolve or Baselight, any time there's a good one available. And there are for so many things in Resolve and Baselight.

 

In general testing with a wide range of media ...  the Adobe built algorithms in Premiere are pretty decent. They do a very good job of safely migrating your wider space pixels to linear Rec.709. Or to HLG or PQ if that's your sequence. No data damage, no clipping, no crushing, no out-of-bounds color.

 

Like all linearization routines, they do vary in visual results slightly from anything else you may have used. That is of course expected, normal, and for many, good. There being no "exact and ONLY!" linearization process for any media, period.

 

All linearization routines have choices built in, including aesthetic ones, which will work 'better' for some users and workflows than others. Which is why colorists test and use a bunch of them as they are needed.

 

There are still some media-type outliers that don't yet work great when run through Adobe's new-ish algorithms. They do well with most of the Red, Arri, and "heavy" camera Sony S-Log I've run through them. Much of the Canon, too.

 

But some of the Sony and Canon "prosumer" cams do give the user intriguing options that don't work so well with the algorithms at this time.

 

Like ... encoding a YUV file (Y-Cb/Cr) as full range rather than legal (never ever do that!) ... which can cause problems. It doesn't 'help' the data any; it only changes how the identical data is encoded in the file.

 

And no software really expects, nor should be given, full-range YUV media. It does not add one level to the displayed image, period ... it just mucks up 'normal' processing for a marketing gimmick. Although you can change the full to legal option in Resolve, you can't in Premiere at this time. And that's a silly, avoidable mess for a number of users.
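For anyone unfamiliar with the legal/full distinction, here's the 8-bit luma math (standard video quantization: legal range puts black at code 16 and white at 235, full range uses 0..255). It's the same picture either way; only the numbers in the file change, which is exactly why a decoder assuming the wrong range shows lifted or crushed levels.

```python
def full_to_legal_luma(y_full):
    """Rescale an 8-bit full-range luma code (0..255) to legal range (16..235)."""
    return round(16 + y_full * 219 / 255)

def legal_to_full_luma(y_legal):
    """Inverse: legal-range luma code back to full range."""
    return round((y_legal - 16) * 255 / 219)

print(full_to_legal_luma(0))    # black: 0   -> 16
print(full_to_legal_luma(255))  # white: 255 -> 235
print(legal_to_full_luma(16))   # and back again: 16 -> 0
```

Note this is a pure rescale: no extra tonal information appears or disappears, which is the point being made above.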

 

So ... for the vast majority of users and uses, the Adobe algorithms are a better starting point than most LUTs would be. Note, that's starting point. Like LUTs, no normalization routine is expected to be the only tonal/color correction for the finished image, just a process to reliably get your data into the linear space of your software's controls.

 

You will need to treat the media, after the tonemapping is applied, a bit differently than with any LUT you have used. But then, you have to do it differently with any one LUT than another already, right? So you develop a new process. Big whoop.

 

I don't use LUTs in Premiere (or Resolve) for most things anymore. The algorithms are simply faster and easier. And safer.

 

But as always, test before working with professional projects.

Community Beginner, Jan 14, 2024


This is great info. Thanks so much for taking the time to clarify the difference! I think I'm going to shoot my HDR PQ with a quite general 2100 to 709 LUT loaded on the camera monitor to get me in the ballpark, then use the tone-mapping algorithm and tweak from there.

LEGEND, Jan 14, 2024

That's probably a best practice for now ... 
