
16-bit or 15-bit+1?

  • February 16, 2011
  • 2 replies
  • 9230 views

I've been searching high and low to understand why Photoshop CS5 (and CS4) only shows a range of 0-32,768 in the color picker for a 16-bit-per-channel image, when true 16-bit should show 0-65,535. I haven't been particularly successful in my search - all I can find is confirmation that yes, this is true, but not why, nor why it hasn't been fixed (I understand it's been this way since Photoshop started supporting 16 bits per channel).

The other thing I can't figure out is whether this is something that's only in the color picker, or whether Photoshop takes 16-bpc (bits per channel) images and samples them down to this 15-bit+1 range (effectively reducing the channel range by half), with the color picker just reading what the data has been munged down to.

And yes, before someone asks, there are cameras and backs that produce true 16-bpc images (I have one - a Hasselblad).

Any pointers to documentation or more detailed discussion of this issue would be greatly appreciated, as calling 0-32768 "16 bits" is rather misleading. I'm feeling cheated somehow.

Thanks,

Jake

    This topic has been closed for replies.

    2 replies

    Participating Frequently
    February 16, 2011

    BonaireGuy wrote:

    And yes, before someone asks, there are cameras and backs that produce true 16-bpc images (I have one - a Hasselblad).

    Actually, while many of the medium format backs and some scanners claim a full 16 bits, that depends entirely on what your definition of "a full 16 bit" file is. In point of fact, the difference between a full 16 bit file (and I seriously doubt the "full 16 bit" claim) and 15+1 is negligible. You would never see the difference (and Photoshop would probably be a different animal in its processing routines).

    Bottom line? Don't worry about it.

    You would be far better off learning how to optimize your capture exposure than worrying about less than 16 bit precision...

    Chris Cox
    Correct answer
    Legend
    February 16, 2011

    Photoshop's internal representation is 0..32768. This gives a midpoint to the range (very useful for blending), and allows for faster math because we can use bit shifts instead of divides.

    That is not a bug, just a design decision to make 16 bit/channel run faster.

    A search of the forums will reveal more information about that decision.
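    To make that design decision concrete, here is a hedged sketch in Python integer arithmetic (the names and the multiply-blend formula are illustrative assumptions, not Photoshop's actual code):

    ```python
    WHITE_15 = 32768   # 2**15: white point when the range is 0..32768
    WHITE_16 = 65535   # full 16-bit white: NOT a power of two

    def multiply_blend_15(a, b):
        # "Multiply" blend in the 0..32768 range: the divide by 32768
        # is a cheap right shift by 15 bits.
        return (a * b) >> 15

    def multiply_blend_16(a, b):
        # The same blend in a true 0..65535 range needs a real divide,
        # because 65535 has no power-of-two shortcut.
        return (a * b) // WHITE_16

    # The 0..32768 range also has an exact midpoint (16384);
    # 0..65535 does not (65535 / 2 = 32767.5).
    assert WHITE_15 // 2 * 2 == WHITE_15          # 16384 is exact
    assert multiply_blend_15(WHITE_15, WHITE_15) == WHITE_15  # white x white = white
    ```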

    Participating Frequently
    February 16, 2011

    Thanks for the official insight.

    So, if I bring in a 16-bpc file, and then save it, will it still retain all the original data, or will it have had the channel information compressed from 0-65535 to 0-32768 (and then presumably expanded back to 0-65535 so that other programs read it correctly)? Or is the 0-32768 some sort of abstract intermediate representation that doesn't affect the original data (perhaps until later)?

    Just trying to understand where my bits are going, if anywhere.

    Thanks,

    Jake

    PS In this day of GPUs with parallel processing, are divides really still that expensive in terms of processing time?

    Chris Cox
    Legend
    February 16, 2011

    Once inside Photoshop, the data is converted to the 0..32768 range.  Yes, resaving will alter your data a little, assuming you had full 16 bit data to start with (and except for cooled scientific cameras, you won't have full 16 bits).
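    A hedged sketch of what that round trip could look like (the rounding scheme below is an assumption for illustration; this thread doesn't give Photoshop's exact conversion formula):

    ```python
    def to_internal(v16):
        # Plausible 0..65535 -> 0..32768 conversion: halve with rounding.
        # (An assumption, not Photoshop's documented formula.)
        return (v16 + 1) >> 1

    def to_external(v15):
        # Scale 0..32768 back out to 0..65535 with rounding.
        return (v15 * 65535 + 16384) >> 15

    # Adjacent 16-bit codes collapse in pairs on the way in, so only
    # 32769 of the 65536 possible input values survive a round trip.
    survivors = {to_external(to_internal(v)) for v in range(65536)}
    assert len(survivors) == 32769
    ```

    Under this scheme the endpoints and midtones round-trip cleanly (0 stays 0, 65535 stays 65535); the loss is a one-code wobble in the least significant bit.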

    BonaireGuy wrote:

    PS In this day of GPUs with parallel processing, are divides really still that expensive in terms of processing time?

    Yes. GPUs are good at a few things, but still kind of lousy at general processing.

    Current processors take 30 to 90 clocks to divide (with a throughput of one divide at a time), and a single clock to shift (with a throughput of 1 to 4 shifts per clock).

    So that's anywhere from 30 to 360 times faster to use the shift, just from a computation standpoint.  Add in the fact that divides tend to stall the CPU pipeline, and it gets worse. Dividing by a constant can be replaced with a reciprocal multiplication, but that needs higher precision than the base type.  So if the compiler does that optimization, then you're talking at least 20 times as slow as the shift (but still faster than the full divide).

    Exact performance differences will vary with the processor model, but that should explain the basic picture.