Inspiring
November 26, 2018
Question

New GLator 16bpc oddity

  • November 26, 2018
  • 1 reply
  • 899 views

Hi. The 2017 GLator sample multiplies 16bpc values by 1.99999 before processing, then divides by the same factor at the end. I'm confused why it does this, as 8bpc doesn't require any conversion. Thanks.

This topic has been closed for replies.


Community Expert
November 26, 2018

honest to god, i have no idea... i'm not even familiar with the problem...


Inspiring
November 26, 2018

The code is:

// in case of 16 bits, convert 32768->65535

colourOut = colourOut * multiplier16bit;

And at the end:

// finally handle 16 bits conversion (if applicable)

// in case of 16 bits, convert 65535->32768

colourOut = colourOut / multiplier16bit;

I guess you get double the precision, but it's strange that it's not done for 8bpc.

françois leroy
Inspiring
November 26, 2018

Hi guys!

OpenGL 16 bits is a bit weird... or at least doesn't work like AE's 16 bits. So in the GLator sample, the 16-bit texture is just converted to "floats" (not really, 'cos you can't have overbrights...) and then converted back to 16 bits when everything's done.

From what I can remember, texture is written as 'half-floats'.

That's why there's the 1.99999 multiplication: it stretches AE's 0 - 32768 values across the texture's full range, and the matching division at the end brings the result back inside AE's 0 - 32768 range.

Cheers,

François