Hi Everybody,
I'm writing a Mac AE plugin, and I'm trying to convert a 16-bit EffectWorld into an image that the Mac OS APIs can use. While doing this, I noticed a peculiar thing about 16-bit EffectWorlds: the RGB components of each pixel range from 0 to 32768, rather than 0 to 65535 as I was expecting. They behave more like 15-bit colors than 16-bit ones.
For example, solid red is represented by these color components:
• Red = 32768
• Green = 0
• Blue = 0
Does anyone know why this is? It just seems like an odd choice, and it gives AE fewer colors to work with.
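For reference, here's a minimal sketch of how I'm walking the pixels, assuming the standard AE SDK types (PF_EffectWorld, PF_Pixel16); the effect setup and world checkout are omitted:

#include "AE_Effect.h"

static void InspectPixels(const PF_EffectWorld *world)
{
    for (A_long y = 0; y < world->height; y++) {
        /* rowbytes is the row stride in bytes, not in pixels */
        const PF_Pixel16 *row =
            (const PF_Pixel16 *)((const char *)world->data + y * world->rowbytes);

        for (A_long x = 0; x < world->width; x++) {
            const PF_Pixel16 *p = &row[x];
            /* Each channel runs 0..32768 (0x8000), not 0..65535:
             * solid red comes back as red == 32768, green == blue == 0. */
            (void)p->red; (void)p->green; (void)p->blue; (void)p->alpha;
        }
    }
}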
It's true! This is Adobe 16-bit, which maxes out at 0x8000 instead of 0xFFFF, so it's really 15-bit plus one extra value. The reason they did it this way is that dividing by PF_MAX_CHAN16 becomes a simple bit shift (x >> 15) instead of a real integer division. Nobody else uses this range, so you'll have to convert to true 16-bit to work with other libraries. I do it this way:
#include <stdint.h>

/* Adobe 16-bit white point and midpoint (match the AE SDK's values). */
#define PF_MAX_CHAN16  32768
#define PF_HALF_CHAN16 16384

/* Adobe 16-bit (0..32768) -> full 16-bit (0..65535). */
static inline uint16_t Promote(uint16_t val)
{
    /* Plain doubling would map 32768 to 65536 and overflow, so the upper
     * half is shifted into the odd values and 32768 lands on 65535. */
    return (val > PF_HALF_CHAN16 ? ((val - 1) << 1) + 1 : val << 1);
}

/* Full 16-bit (0..65535) -> Adobe 16-bit (0..32768); inverse of Promote. */
static inline uint16_t Demote(uint16_t val)
{
    return (val > PF_MAX_CHAN16 ? ((val - 1) >> 1) + 1 : val >> 1);
}
Thanks for the quick reply. That's exactly what I needed. 🙂