New GLator 16bpc oddity
Hi. The 2017 GLator sample multiplies 16bpc values by 1.99999 before doing any processing, then divides by the same factor at the end. I'm confused about why it does this, as 8bpc doesn't require any conversion. Thanks.
Honest to god, I have no idea... I'm not even familiar with the problem...
The code is:
// in case of 16 bits, convert 32768->65535
colourOut = colourOut * multiplier16bit;
And at the end:
// finally handle 16 bits conversion (if applicable)
// in case of 16 bits, convert 65535->32768
colourOut = colourOut / multiplier16bit;
I guess you get double the range to work with, but it's strange it's not done for 8bpc.
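Just to pin the numbers down, here's a tiny standalone sketch of the arithmetic as I understand it (my own placeholder code, not anything from the SDK). The factor looks like 65535/32768, so 32,768 goes up to 65,535 and comes back again:

#include <cstdio>

int main() {
    // Assumed values: AE 16bpc white is 32768; a normalized 16-bit
    // OpenGL texture treats 65535 as white.
    const float multiplier16bit = 65535.0f / 32768.0f;   // ~1.99997, presumably the "1.99999"

    float aeWhite  = 32768.0f;                     // AE 16bpc white point
    float glWhite  = aeWhite * multiplier16bit;    // 65535, top of the 16-bit range
    float backToAE = glWhite / multiplier16bit;    // 32768 again after the divide

    std::printf("multiplier = %f, up = %f, back = %f\n",
                multiplier16bit, glWhite, backToAE);
    return 0;
}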
Hi guys!
OpenGL 16-bit is a bit weird... or at least it doesn't work like AE's 16-bit. So in the GLator sample, the 16-bit texture is just converted to "floats" (not really, 'cos you can't have overbrights...) and then converted back to 16-bit when everything's done.
From what I can remember, the texture is written as 'half-floats'.
That's why there's the 1.99999 multiplication, so it stays inside the 0 - 32768 range.
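What I mean by "not really" floats: once the pixels have been through 16-bit integer storage there's no headroom left, so nothing above white survives. A throwaway example (placeholder code, not the sample itself):

#include <algorithm>
#include <cstdint>
#include <cstdio>

int main() {
    // Assumption for illustration: AE 16bpc stores 0..32768 with 32768 as white.
    float overbright    = 1.5f;                                               // 150% white, fine in true float
    std::uint16_t ae16  = (std::uint16_t)(std::min(overbright, 1.0f) * 32768.0f); // clipped at white
    float back          = ae16 / 32768.0f;                                    // 1.0, the overbright is gone

    std::printf("%f -> %u -> %f\n", overbright, (unsigned)ae16, back);
    return 0;
}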
Cheers,
François
Thanks François. Sorry, I don't fully get it: we have 0 - 32,768 for 16bpc in AE, and then the first shader makes it 0 - 65,535... why is that? It converts back at the end, but I don't understand why it does the conversion in the first place.
Oh, you're right, I made a mistake. It multiplies to stay in the range 0 - 65535, not 0 - 32768...
It is the accepted range with OpenGL, as described here: graphics - What is the correct way to sample a 16-bit height map in OpenGL - Stack Overflow
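Put another way (rough placeholder numbers, not the actual plug-in code): OpenGL normalizes an unsigned-short texture over 0 - 65535, so without the multiplier AE's white of 32,768 would show up in the shader as roughly 0.5, and white only comes back as white because of that multiply/divide pair:

#include <cstdio>

int main() {
    // Assumed values: AE 16bpc white = 32768; OpenGL divides unsigned-short
    // texels by 65535 when it samples them.
    const float aeWhite         = 32768.0f;
    const float glNormalizer    = 65535.0f;
    const float multiplier16bit = 65535.0f / 32768.0f;

    float sampledRaw   = aeWhite / glNormalizer;           // ~0.5 in the shader without the fix
    float sampledFixed = sampledRaw * multiplier16bit;     // 1.0 after the multiply
    float writtenBack  = sampledFixed / multiplier16bit * glNormalizer; // 32768 on the way out

    std::printf("raw = %f, fixed = %f, back = %f\n",
                sampledRaw, sampledFixed, writtenBack);
    return 0;
}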
But I must say I gave up on 16 bits... I just convert my 16-bit textures to 32-bit. The web is full of 16-bit related issues, and I don't understand half of what they say 😉
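The 32-bit route I ended up with is basically this shape (a minimal sketch with made-up buffers, not my actual plug-in): divide by 32,768 on the way in, multiply on the way out, and OpenGL only ever sees ordinary floats:

#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    // Assumption for illustration: AE 16bpc values run 0..32768.
    std::vector<std::uint16_t> ae16 = {0, 16384, 32768};   // black, mid grey, white
    std::vector<float> asFloat(ae16.size());

    for (std::size_t i = 0; i < ae16.size(); ++i)
        asFloat[i] = ae16[i] / 32768.0f;                    // 0.0, 0.5, 1.0 for the GPU

    // ... do the OpenGL work on a float texture here ...

    for (std::size_t i = 0; i < ae16.size(); ++i) {
        std::uint16_t back = (std::uint16_t)(asFloat[i] * 32768.0f + 0.5f); // back to AE's range
        std::printf("%u -> %.3f -> %u\n", (unsigned)ae16[i], asFloat[i], (unsigned)back);
    }
    return 0;
}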

