I was comparing 24-bit and 48-bit depth for archival scanning, when I came across this bug.
Technical info/version:
Adobe Photoshop Version: 23.5.1 20220907.r.724 5600b96 x64
Operating system: Windows 10 64-bit
Version: 10.0.19044.2130
ImageMagick-7.1.0-Q16-HDRI
How to reproduce:
Repeating the same procedure with my 24-bit scans produces a black comparison frame, showing zero difference.
I cannot insert the source files into this post because the scans are larger than 47 MB. If external links are fine, I can upload them somewhere else.
External links work fine for sharing files. Post a URL to the file using CC files or dropbox, or something similar to share the file.
I did some more testing and came up with an easier way to reproduce the issue, with smaller files as well.
Test files and comparisons are attached. As I initially pointed out, this issue also occurs for output files of scanning software and is not expected behavior.
I cannot attach TIF files, so I included the PNG.
To illustrate the issue, this is the base image made in Gimp:
Executing the comparison against the Photoshop save should give a black frame for identical pixel data. These are the actual results for PNG and TIF:
Do you understand what Photoshop 16bit is?
16-bit color channels, which other software typically sums to 48-bit across the three channels. What is the point of your question?
Thank you very much for the additional link. I did not know about this discrepancy.
So Photoshop is not using the full 16 bits per channel, while other software is?
Photoshop does use 16 bit, but not the way you expected.
A 15-bit range plus one extra value cannot be expressed in 15 bits – that is obviously impossible.
So it is 16 bit, but the available values are 2^15 + 1 (0–32768), not the full 2^16.
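The arithmetic can be sketched in code. The scaling below (multiply by 32768/65535 on the way in, by 65535/32768 on the way out) is only a plausible model of the 15 bit + 1 conversion for illustration, not Adobe's actual implementation:

```python
# Model of a 15 bit + 1 pipeline: true 16-bit values (0..65535) are
# scaled into a 0..32768 internal range and back out again.
# The exact rounding Photoshop uses is an assumption here.

def to_internal(v: int) -> int:
    """Map a true 16-bit value onto the 0..32768 internal scale."""
    return round(v * 32768 / 65535)

def to_external(p: int) -> int:
    """Map an internal value back onto the 0..65535 file scale."""
    return round(p * 65535 / 32768)

# Count how many of the 65536 possible codes fail to survive a round trip.
changed = [v for v in range(65536) if to_external(to_internal(v)) != v]
print(len(changed))  # → 32767: half the codes shift...
print(max(abs(to_external(to_internal(v)) - v) for v in range(65536)))
# → 1: ...but never by more than one step out of 65535.
```

Under this model, 32767 of the 65536 possible codes cannot be represented, but each one shifts by at most a single 16-bit step, which is why the loss is measurable in a pixel comparison yet invisible in a photograph.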
That is some very valuable insight. My problem is that when I receive a lossless file and export it losslessly, I expect Photoshop to be able to not lose any data.
So far I have seen scanning software and Gimp handle this differently. When I receive a file, I cannot always know where it comes from, but the expectation is that I will not unnecessarily alter it. I initially discovered this while attaching ICC profiles to archival scans. That is a use case where I especially do not expect or want Photoshop to alter anything at all.
You have to decide for yourself if the »missing« 32767 values matter that much for these use cases.
But if this is about photographic images, you may want to overlay the two images (in Gimp, for example, as that apparently works with »15bitx2«) and try to discern a difference (for starters, without the help of Adjustments).
As explained above, Photoshop does not use the full 16-bit range, but for photographic purposes the difference is academic: the delta at each pixel is so small it cannot be seen by eye. It was explained some time ago that 15 bit + 1 was built into the core processing for speed, and to provide an exact middle value, which works better for certain image processing. For viewing images it does not result in a visible difference.
There are some specific use cases where it does matter, for example 3D height maps or specific scientific purposes. For those, either using other software or converting elsewhere to 32-bit float and opening in Photoshop is appropriate.
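The 32-bit-float route works because an IEEE 754 single-precision float has a 24-bit significand, so every 16-bit integer value is representable exactly. A quick check of that claim in plain Python, using `struct` to force 32-bit precision:

```python
import struct

def as_float32(x: float) -> float:
    """Round-trip a number through IEEE 754 single precision."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# Every 16-bit code survives storage as a 32-bit float unchanged,
# so converting to 32-bit float before opening loses no data.
assert all(as_float32(v) == v for v in range(65536))
print("all 65536 values exact")
```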
Dave
Would it be possible to notify the user of this change when saving or opening an image, before the data is lost? I accept that this is not about visual fidelity, but I had no way of knowing this without getting the information on the forum. It is unexpected behavior.
Is the performance boost of 15 bit + 1 still needed today? I do not know whether Adobe would touch the core, but carrying this along might not be necessary anymore, and providing regular 16-bit processing would give a more consistent user experience with standardized formats.
I don't work for, and therefore cannot speak for, Adobe.
I have no idea how much work would be involved in rewriting such a core function of what is now a huge application, along with any other associated functionality that uses and depends on that 15 bit + 1 processing. I suspect a lot more than we would think, given that Photoshop has used this method since 16 bit/channel support was introduced in 1993 (version 2.5).
Dave
It is unexpected behavior.
That seems correct.
Maybe you should post a Feature Request on the issue (but do a search first; maybe one already exists that you could add your vote and comment to).
While the cost of doing this would be significant (if not massive) - the practical user benefit would be precisely zero, except in the special circumstances Dave mentions.
That equation seems to have a pretty robust outcome, unless some kind of game-changer happens.
The cost would be massive: all functions that work with 16 bit would need rewriting and retesting, and it would affect all third-party filters that work in 16 bit, because they all expect the current image format. The huge body of existing 15 + 1 bit files would also no longer be processed losslessly. There would then be two kinds of 16-bit PSD, both of which would have to be supported forever. Realistically, to avoid massive disruption it would mean adding a new image sub-format alongside the current one, so the user would now have to understand why there are two 16-bit formats under Image > Mode.
Still, a warning when opening true 16-bit data might be in order. But consider: nowhere does Photoshop warn against the loss from saving as JPEG, which is far more significant in photographic editing and affects countless people; they are simply expected to know. It doesn't warn that converting to a new colour space and back will introduce posterisation. And so on. As for the processing cost: that is outside my expertise, but I have read a statement that, because much image processing now runs on the GPU, true 16-bit processing could be considerably more expensive – meaning slower display and processing for everyone.
Yep. I think that sums it up well. Not going to happen, unless there's an earthquake.