What are the benefits of working in the Photoshop 32-bit space on either Raw Smart Object Layers, or even taking a Raw image from Lightroom/Camera Raw into Photoshop the standard non "Smart Object" way (into 16-bit) then immediately going into the 32-bit space (Image/Mode/32-bit)?
I have been doing this for many years now, generally taking images as far as possible in the 32-bit space and doing a "save as" so I can go back to the saved 32-bit layers if I ever need to. Then, once I have gotten all I can out of the adjustments available in the 32-bit space, I bring the photo back into the 16-bit space.
I believe I see some real world, noticeable benefits. Of course this workflow comes with many quirky things that the 32-bit space entails.
One (of many) things I believe I am seeing is the ability to stretch tones further, and use/apply more radical adjustments that would result in posterization/banding (more damage) in the 16-bit space.
One example: a client recently sent me a black and white image that did not have great data (not shot bright enough), and although most of the work was done in Raw/Lightroom, once it was brought into the 16-bit space and several edits were applied to optimize it (some of them using masks), banding/posterization occurred fairly quickly. I have seen this happen for years.
BUT, I then tried bringing the raw file from Lightroom into Photoshop as a raw "Smart Object", and this time I immediately converted it into the 32-bit space. Once it was in the 32-bit space I rasterized (flattened) it and proceeded to do the same masking/adjustments. No banding/posterization occurred. After finishing the edits in 32-bit mode (and saving a copy there in case I ever need to go back to it), I brought the finished/flattened image back into the 16-bit space; there was no banding/posterization and the tones just seemed better overall. We made a huge print of it, and the tones were noticeably improved. I have come to believe that the masks are higher quality in 32-bit as well.
Lastly, one other example (of many): I have also been sent damaged, banded, rasterized images (that had a lot of abusive adjustments applied), including JPEGs, that nothing in 8-bit or 16-bit could fix (only make worse).
But then, instead, I brought the images straight into the 32-bit space, made a very slight Gaussian Blur adjustment to the sky (the banded area), and then converted back to 16-bit, and the banding was gone! In this latter example, there seems to be some kind of beneficial redistribution of the data happening. In fact, I recall an image whose banding disappeared just by going into 32-bit and then back to 16-bit, without any adjustments.
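For what it's worth, the "redistribution" effect in that last example can be mimicked numerically. Below is a minimal NumPy sketch (the ramp, blur radius and bit depths are made-up illustration values, and a simple box blur stands in for Gaussian Blur): blurring the quantized data in floating point invents intermediate values between the old 8-bit steps, which is why the hard banding edges can vanish after re-quantizing.

```python
import numpy as np

# Simulate a "banded sky": a smooth, narrow ramp that was quantized to 8-bit levels.
width = 4096
smooth = np.linspace(0.40, 0.45, width)
banded = np.round(smooth * 255) / 255            # 8-bit quantization -> visible steps

# A "very slight blur" stand-in: a simple box blur done in floating point.
radius = 25
kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
blurred = np.convolve(banded, kernel, mode="same")[radius:-radius]

# Re-quantize the blurred result to 16-bit integer levels.
requant = np.round(blurred * 65535) / 65535

print("distinct levels in the banded 8-bit ramp:", len(np.unique(banded)))
print("distinct levels after the float blur    :", len(np.unique(requant)))
```

It doesn't recover lost detail; it only smooths over the existing quantization steps.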
All thoughts are welcome. Thank you!
I really can't see how you can get banding in 16 bit files, even with extreme adjustments. It shouldn't even be theoretically possible. Something else must be going on if you see that.
First of all, keep in mind that your display pipeline is 8 bit. That's where the banding you see happens - even more so if you have calibration tables loaded into the video card or a bad monitor profile. These can all pile on top of each other.
Secondly, Photoshop previews at less than 66.7 % zoom are always rendered at 8 bit depth. Try again at 100%.
Masks are the same bit depth as the document. However, selections are restricted to 8 bit depth, so a selection directly turned into a mask is 8 bit (until you start working on it).
And jpeg, of course, only ever supports 8 bit depth.
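A quick way to see how much of the visible stepping is the display's doing rather than the file's: compare the number of levels a shallow 16-bit gradient actually contains with what an 8-bit display path can distinguish. A minimal NumPy sketch (the gradient span is an arbitrary example):

```python
import numpy as np

# A smooth, shallow 16-bit gradient, e.g. a sky spanning ~10% of the tonal range.
width = 8000
ramp16 = np.round(np.linspace(0.45, 0.55, width) * 65535).astype(np.uint16)

# What an 8-bit display path can distinguish (crude 16 -> 8 bit truncation).
ramp8 = (ramp16 // 257).astype(np.uint8)

print("distinct 16-bit levels in the file     :", len(np.unique(ramp16)))
print("distinct levels on an 8-bit display    :", len(np.unique(ramp8)))
print("average width of each visible step (px):", width // len(np.unique(ramp8)))
```

The file itself holds thousands of levels across that range; any 300-pixel-wide bands you see on screen come from the 8-bit leg of the chain, not from the data.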
Thank you for taking the time.
I completely understand where you are coming from, and I understand all the above information - I teach it. But banding and posterization are a well-known thing when you do print post-processing for a living, full time, for top-end photo galleries, as I have over the last 15-20 years. Many clients do incredible amounts of adjusting to their files: all kinds of selections, and masks on masks on masks and more. In black and whites it can happen even more easily when people are doing a lot of major tonal stretching, masking, blurring, noise reduction, more stretching and so forth. So I actually do not want to debate or defend that part of my post above.
I'm really wanting to see what kind of answers some might have in regards to the original question:
What are the benefits of working in the Photoshop 32-bit space on either Raw Smart Object Layers, or even taking a Raw image from Lightroom/Camera Raw into Photoshop the standard non "Smart Object" way (into 16-bit) then immediately going into the 32-bit space (Image/Mode/32-bit)?
Thank you.
"What are the benefits of working in the Photoshop 32-bit space on either Raw Smart Object Layers"
For your workflow - none at all. Starting in 16 bit and then moving to 32 bit merely maps the existing values into the floating point space and changes to linear gamma. 32 bit floating point linear is for HDR work, which might mean combining multiple exposures or 3D renders at high dynamic range. It therefore handles white and black levels way beyond those that can be displayed on monitors (hence the need for tone mapping). It does, however, have major disadvantages: the tools available in that mode are severely restricted, and you cannot actually see the full range of tones or adjustments that you are making (you have to use the slider to select the visible part of the range).
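A minimal sketch of that first point, assuming a simple 2.2 gamma for the linearization (Photoshop's actual conversion differs in detail): converting existing 16-bit integer data to 32-bit float just re-expresses the same discrete levels as floats, and creates no new tones between them.

```python
import numpy as np

# An existing 16-bit (integer) image: a fixed set of discrete levels.
img16 = np.array([0, 1024, 16384, 32768, 65535], dtype=np.uint16)

# Mode > 32 Bits/Channel: values become floats and the encoding becomes linear.
# Approximated here with a plain 2.2 gamma linearization.
img32 = (img16.astype(np.float64) / 65535.0) ** 2.2

print(img32)
# Each input level maps to exactly one float value; no tones are created between
# them. The float container could hold values above 1.0 or below 0.0, but a
# converted 16-bit image has none to put there.
print("distinct levels before:", len(np.unique(img16)),
      "after:", len(np.unique(img32)))
```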
I sometimes use 32 bit output from 3D renders, which by their nature have extended white and black points, but I quickly tone map to 16 bit for further adjustment and processing. 16 bit is more than capable of handling extreme manipulation and gives access to all tools and most filters without any banding or posterisation.
Dave
Thank you for taking the time to help.
So in a nutshell you believe there are no benefits at all. I believe I have been seeing many benefits for years of full time processing work for my living (I have described a couple of examples in my post).
So ACR, or a Raw image brought into Photoshop as a Raw Smart Object, is processed in 16-bit integer?
And converting it into the 32-bit space, then using 32-bit masks only on other Raw images brought into the stack (no rasterizing), or on multiple Raw versions of the same Raw file (New Smart Object via Copy, then a Camera Raw adjustment, then 32-bit masking), with no rasterizing... has no potential quality benefit or increased processing precision?
Using Raw on Raw with 32-bit masks offers no improvement over 16-bit?
I'm convinced I see differences all the time. I'm going to wait and see if anyone else here might be able to tell me why?
I'm convinced I see differences all the time.
By @Camera Raw Fan
I know I'm stepping on a hand grenade when I'm saying this, so please don't take this the wrong way: Placebo can be quite powerful. This is not intended as a dismissal of what you're saying, but there are very good and solid arguments against it.
There's just no way you can actually see any difference, whether on screen or in print. Yes, there's a purely theoretical advantage, but on the same level as taking a Ferrari and fitting it with a rocket engine. It won't reach the speed of sound, it will fall apart long before. It's not a balanced system.
You say you see banding. That's in your display system, or in the print pipeline. You're not getting any banding that the eye can perceive with 14/15/16 bit data.
You might get banding in a particularly buggy camera profile in Lightroom/ACR. That has happened, but very rarely (and quickly fixed by the ACR engineers).
32 bit is for HDR, and the main purpose is to contain data above or below the white and black points of a normal gamma encoded image. To achieve that, you need to combine bracketed exposures in one HDR file.
I see you've started the same discussion in the Lightroom forum, and are basically getting the same answers.
Hi
If you are starting with a single exposure then your camera will be capturing that exposure in 14 bit linear (12 bit on older cameras). The dynamic range between the brightest white captured and the darkest black is limited by the sensor. That tonal range can be mapped nicely onto a 16 bits /channel RGB color space with a 1.8/2.2 gamma.
When viewing on screen most monitors will restrict you to viewing 8 bits /channel and some of those actually use 6 bits + dithering. The higher end monitors designed for critical graphics work use 10 bits/channel - provided that the entire chain (image application, video card, and monitor) can all use 10 bit/channel. On these monitors you cannot see banding in gradients so any image bit depth above that 10 bits will be fine visually. 16 bit gives plenty of range for adjustments that will still remain step free visually.
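For reference, here is the rough levels-per-channel arithmetic behind the numbers above (a back-of-the-envelope sketch counting ideal code values only):

```python
# Levels per channel for the bit depths mentioned above.
depths = {
    "camera raw capture (14-bit linear)": 2 ** 14,        # 16384 levels
    "Photoshop '16-bit' (really 0..32768)": 2 ** 15 + 1,  # 32769 levels
    "true 16-bit integer": 2 ** 16,                        # 65536 levels
    "10-bit display": 2 ** 10,                             # 1024 levels
    "8-bit display": 2 ** 8,                                # 256 levels
}
for name, levels in depths.items():
    print(f"{name:38s} {levels:>6d} levels")
```

A 14-bit capture fits into the 16-bit working space with room to spare; the display is the narrowest point in the chain, which is why banding tends to be a viewing artifact rather than a data artifact.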
8 bit and 16 bit images have the same black and white points, they just divide the tones between them into more levels. 32 bit floating point images though, extend both black and white so you get parts of the image that are blacker than can be displayed and others that are whiter than can be displayed. To see those images, they have to be tone mapped into the range that can be displayed in the 8/16 bit colour spaces and at that point there is no advantage in staying with 32 bits.
Where 32 bits /channel does score is the combining of multiple exposures from the camera (each using 14 bits raw) to combine several images each with a different part of the scene's dynamic range, into a single 32 bit exposure. This effectively extends the range captured by the camera sensor - hence the term High Dynamic Range images. These images can be used in 3D applications to add realistic lighting to a 3D scene. However, for normal viewing, they still have to be tone mapped into a range that can be displayed or printed both of which can be handled in 16 bit (or 8 bit for final printing).
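A toy merge illustrates why bracketed exposures are the thing that actually buys dynamic range. This is a naive weighted average in NumPy, not Photoshop's Merge to HDR algorithm, and all the numbers (scene range, EV spacing, clipping threshold) are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 3.5, size=10_000)   # "true" radiance, up to 3.5x paper white

def capture(radiance, ev):
    """One camera exposure: shift by EV, clip highlights, quantize to 14 bits."""
    exposed = np.clip(radiance * 2.0 ** ev, 0.0, 1.0)
    return np.round(exposed * 16383) / 16383

brackets = {ev: capture(scene, ev) for ev in (-2, 0, +2)}

# Naive HDR merge: scale each frame back by its exposure, average unclipped samples.
total = np.zeros_like(scene)
weight = np.zeros_like(scene)
for ev, frame in brackets.items():
    usable = frame < 0.999                    # ignore clipped pixels
    total[usable] += frame[usable] / 2.0 ** ev
    weight[usable] += 1.0
merged = total / np.maximum(weight, 1.0)

print("brightest value one normal (0 EV) frame records:", brackets[0].max())
print("brightest radiance recovered by the merge      :", round(merged.max(), 2))
```

Combining several ACR renderings of a single raw file skips the capture step, so there is nothing above the single frame's clip point to recover - which is the point made in the next paragraph.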
There is no advantage at all in preparing multiple outputs of the same capture in camera raw and then combining them in 32 bit to form a pseudo HDR image, as the dynamic range of the image is unchanged from the initial 14 bit sensor captured raw image.
Dave
Thank you for a great explanation here!
Side note: I use a 10 bit monitor. My goal is NOT to create an HDR image; it is to take advantage of the potential of a higher precision editing space.
A Raw (Smart Object) on Raw workflow, pushed as far as possible, with what I believe is higher precision masking and a much more robust file (once it is finally rasterized in 32-bit), then maybe a few last tweaks made there, saved (save as PSB), and then converted/rendered down to a 16-bit TIFF for output. We have been seeing very notable benefits this way for years, and I am still wondering why.
As far as seeing posterization or degradation, including banding, in 16 bit files with lots of masks and lots of micro and/or radical destructive adjustments - we see this all the time in files coming into the printing lab. I get these images with their complete layer stacks from so many clients. In fact, I can create the damage myself.
Example: I have a very high-end nature photography gallery client (about as high as it gets - I do all the post-processing for print preparation) and we were often receiving files with banding and posterization in them, always processed in 16-bit. These are images that need to be world class at 10+ foot, 90", 80", 70", 50", 36", 24" and 18" sizes. Yes, the banding shows up in the file and in the prints.
*In this case, I asked to see the company's full workflow. They agreed, and they used GOBS of masks on masks on masks... including LOTS of radial gradients on masks, on many layers, and more. Lots of pretty typical micro-tuning, plus lots of selection-based masks (some of them blurred...). I would say pretty typical for many nature photographers today.
After carefully watching and evaluating their entire workflow (from Raw to finish, all PS work done in 16-bit), I had them switch to a Raw (Raw Smart Object) on Raw layer workflow: maximize LR, bring the image in as a Raw Smart Object, double check color settings, go straight into Mode/32-bit, duplicate layers by duplicating the Raw file (New Smart Object via Copy), and NO Adjustment Layers or Smart Object Layers used.
All adjustments were done by clicking into Camera Raw (as a Raw layer), throwing a mask on the layer, and doing/localizing the same things/techniques they did before - the same masking techniques. The BIG difference was using Raw layers only, with what seems clear to me to be higher fidelity masking/processing/rendering. I even had them use a Raw reverse "Texture" (in Camera Raw) layer as their blur layer, instead of a typical Gaussian Blur.
Lastly, I had them rasterize the image in 32-bit. On some images, maybe a very small tweak or two was then applied (Curves, Hue/Saturation, Levels), but only if they could not get it done as a Raw layer. Then save. All in the 32-bit space.
THEN, make a copy of it, and render/bring the copy into 16-bit mode (usually through the HDR window - Exposure and Gamma). NO banding, no posterization, robust pristine data!
Ever since this workflow was adopted, we have never had banding/posterization issues again, from ANY client that has adopted it.
I see this almost daily. So, if there is "no benefit" (according to some), what might be the reason we are seeing a very real and noticeable improvement, including (but not limited to) no banding, better smoothness of tones and colors, less noise build-up, better shadows, no posterization degrading, fewer artifacts, improved highlight quality and noticeably cleaner and far better enlargements?
Something is going on. We have been doing this for years. I'd like to see if anyone might have any ideas as to why?
All high bit processing in Photoshop is actually 15+1 bits.
You will never get banding editing a high bit document. You can see banding in them IF you don't have a full, high bit display path. The banding is in the display, not the image data. And if there is banding in the display path, 32-bits does nothing more to aid in this issue.
A full high bit display path involves the display, the video card and the OS (plus the application, but Photoshop supports this).
High bit editing is useful as overhead, to keep edits from introducing the banding they can cause in 8-bit per color images:
http://digitaldog.net/files/TheHighBitdepthDebate.pdf
Here is a document produced to view and determine if you have a high bit display path. Open it in Photoshop, view at 100% zoom. IF you see banding, it is your display path.
https://www.dropbox.com/s/s14f4w7dq85r7oo/10-bit-test-ramp.zip?dl=0
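A small numeric sketch of that "overhead" point (arbitrary example values, with a plain Levels-style stretch standing in for a real edit): quantize a dark, flat ramp at two storage depths, apply the same aggressive stretch, and count how many of the 256 output levels survive.

```python
import numpy as np

ramp = np.linspace(0.05, 0.15, 5000)     # a dark, flat area, e.g. an underexposed sky

def edit(values, levels):
    """Quantize to the storage bit depth, then apply a strong Levels-style stretch."""
    q = np.round(values * (levels - 1)) / (levels - 1)
    stretched = np.clip((q - 0.05) / 0.10, 0.0, 1.0)   # map 0.05..0.15 onto 0..1
    return np.round(stretched * 255)                    # what the 8-bit output stage sees

for name, levels in [("edited from 8-bit   ", 256), ("edited from high bit", 32769)]:
    out = edit(ramp, levels)
    print(f"{name}: {len(np.unique(out)):3d} of 256 output levels used")
```

The high-bit version hands the output stage a full set of levels; the 8-bit version only ever had a couple of dozen to give, which is where posterized "comb" histograms come from.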
Thank you very much. I have a high bit display path, and the degrading that we see is in the prints.
Thank you very much. I have a high bit display path, and the degrading that we see is in the prints.
By @Camera Raw Fan
Then that's a totally different issue and has nothing to do with the bit depth of the data or with Photoshop. Most print pipelines don't even send high bit data to the driver itself. The few that are known to do so (for example, my Epson 3880 on Mac only) produce identical prints (measured) whether you send a 16-bit or an 8-bit per color document to that printer.
Although @D Fosse already mentioned this, I feel it is important to emphasize once more that Photoshop's 16/32bit per channel workflow is severely hampered by selections only being 8-bit. High-bit depth data is immediately destroyed when selections are used by a client to create masks. Basically they are working with 8-bit data in selections that wreck a high-bit depth image more and more as more selections are created and layer masks based on those 8bit selections are introduced.
And people are surprised banding occurs...!
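A minimal sketch of what that 8-bit limit means in practice (illustrative numbers only): a smooth feathered selection turned into a mask can hold at most 256 distinct grey steps, even in a 16-bit document.

```python
import numpy as np

feather = np.linspace(0.0, 1.0, 4000)                 # a smooth 4000-px feathered edge

mask_doc_depth = np.round(feather * 32768) / 32768    # mask painted at document bit depth
mask_from_sel  = np.round(feather * 255) / 255        # mask made from an 8-bit selection

print("grey steps available at document depth:", len(np.unique(mask_doc_depth)))
print("grey steps via an 8-bit selection     :", len(np.unique(mask_from_sel)))
```

Whether those 256 steps actually become visible depends on how hard an adjustment is pushed through the mask, which is what the test later in this thread probes.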
And of course Photoshop's so-called 16 bit mode can only handle half the data of a true 16bit image mode. Photoshop throws away 32767 values in a full-range 16bit HDR image when it is opened. And 32bit conversions to 16bit are of course also affected. (And Photoshop doesn't warn the user about this!!!)
To me this means Photoshop is ENTIRELY unsuitable for high-end 16bit image processing and HDR workflows, and the selection tool should be avoided in either 16bit or 32bit mode (unless it is unimportant to retain all image data, but then you ought to ask yourself why are you working in high bit depth mode...).
In simple terms: 32bit to 16bit conversions should be done outside of Photoshop, in an image editor that actually uses the full range of values - a real 16bit mode, rather than Photoshop's 15bit mode. That is, if you care about maintaining the full range of 65535 values per channel.
And then there is the unfortunate Photoshop "feature" that only displays an 8bit image pyramid when zooming into a 16bit image, causing banding where there is no actual banding, further confusing users.
All the result of Photoshop's legacy core code that hasn't seen an update to fix these issues since 16bit mode was introduced a long time ago.
To top it off, Photoshop's 32bit mode is still restricted in terms of functionality.
Of course, for most users it is "good enough", I suppose.
Myself, I switched to different alternatives that actually offer a fully supported 16bit and 32bit per channel image processing & compositing workflow.
Are you showing us a mask, or the edits of the high bit data 'under' the mask? And what edits?
And of course, there are masks in Adobe's raw processors.
There are very, very few true 16-bit capture devices, and again, no matter; Photoshop is 15+1 bits, and always has been. The test is to illustrate data loss and banding when editing any high bit data.
Just took this test file: https://www.dropbox.com/s/s14f4w7dq85r7oo/10-bit-test-ramp.zip?dl=0
Open in Photoshop.
Make a rectangle marquee over half.
Save Selection.
Target just that Alpha Channel, make a gradient (black to white).
Now use Load Selection which is a gradient selection of the data above.
Pull Levels (a lot).
Result: no banding in the edited data. On my high bit video path, the edited gradient looks as smooth after edits through this '8-bit mask' as it did before.
I could pop a screen capture but a JPEG wouldn't look right.
Download it and do the same test. I see no issues editing this high bit data, which is designed to show banding, even after editing through the mask.
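For anyone who wants a rough numerical stand-in for that test (this is not the actual test file, just a NumPy approximation in which a masked adjustment is modelled as a per-pixel blend):

```python
import numpy as np

w = 4096
image = np.linspace(0.0, 1.0, w)                       # smooth high-bit test ramp
mask = np.round(np.linspace(0.0, 1.0, w) * 255) / 255  # 8-bit black-to-white gradient selection

edited = np.clip((image - 0.25) / 0.5, 0.0, 1.0)       # "Pull Levels (a lot)"
result = image * (1 - mask) + edited * mask            # adjustment applied through the mask

for bits in (8, 10):
    shown = np.round(result * (2 ** bits - 1))
    print(f"{bits:2d}-bit display: largest jump between neighbouring pixels "
          f"= {int(np.abs(np.diff(shown)).max())} level(s)")
```

Banding would show up as multi-level jumps; here the 8-bit mask's steps are far too fine, relative to the edit, to produce any, which matches the "no banding" result above.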
I agree that 8 bit selections is a real shortcoming in Photoshop.
But it stops there. The rest of the discussion is about low bit depth (8 bits) vs. high bit depth (14 bits and over). Discussing 15 or 16 bits is like discussing how many angels can dance on the head of a pin. I'm sure it has theoretical interest to some, but it has no practical significance. I'm too busy getting work done to be concerned.
8 bit selections also apply in 32 bits/channel mode, so they are not at the root of any difference between 16 bit and 32 bit that the OP is seeing.
Dave
I totally understand and feel much the same way. Why they are not moving forward with the technology (unless they are having financial problems) I don't know, and it's not good IMO. I get banding in 16 bit files sent to me for post-processing for exhibition printing ALL THE TIME! Selection masks on top of selection masks, and altered/adjusted selection masks on top of the same, over and over... is not only a very popular workflow today for so many photo artists, it's also a recipe for degradation, posterization and banding - and the larger one prints, the more noticeable the damage.
BUT, having said that... there is something going on in the 32-bit space that allows more of it before the damage starts to show. I still do not know exactly what.
@Camera Raw Fan wrote:
I get banding in 16 bit files sent to me for post-processing for exhibition printing ALL THE TIME!
Banding from 16-bits on what printer? I've never seen this on any output device FWIW (going back to Lightjet output).
Now take an 8-bit per color document with banding and convert it to high bit - you'll still have banding.
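That last point in a few lines of NumPy (illustrative values): once a ramp has been quantized to 8-bit steps, promoting it to a higher bit depth represents the same steps more precisely but creates no new tones.

```python
import numpy as np

smooth = np.linspace(0.3, 0.4, 4096)             # the tones the scene originally had
banded8 = np.round(smooth * 255) / 255           # banding baked in by 8-bit editing

promoted16 = np.round(banded8 * 65535) / 65535   # Image > Mode > 16 Bits/Channel afterwards

print("distinct levels in the 8-bit file     :", len(np.unique(banded8)))
print("distinct levels after converting to 16:", len(np.unique(promoted16)))
```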
All types: inkjet, laser, metal... It's in the file. It is the nature of how the current trend of popular younger landscape/nature photographers sometimes process their files (an anything-goes type of approach), with the aforementioned countless selection masks and damaging adjustments. It never happens with conservative types of adjusting.
It's in the file because someone did edits outside high bit data - that's the only explanation. Take high bit data from the source (a high bit scan, or raw), edit in high bit, and there is no banding on output. Take 8-bit per color data, edit it severely or save it as JPEG and send it to a printer - yeah, there could be banding on output. Most output devices, if sent high bit data, don't send high bit data to the driver and printer anyway. And for the few that do, there is zero colorimetric difference in the output. I've done the measurements using an Epson 3880, where the Mac-only driver does actually send high bit data to the printer: send out thousands of color patches in high bit, convert to 8-bit per color, and measure both prints on a spectrophotometer; the deltaE is far lower than 1.0, meaning an invisible difference. And that's with a driver that hands off high bit data to the printer itself.
If someone is supplying high bit documents and the output bands, it's the printer, or the data wasn't edited in high bit from start to finish. Again, saving 8-bit per color data in high bit buys nothing and doesn't address the banding that already existed from the 8-bit per color edits. OR the printer is introducing banding.
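For context, the kind of comparison being described boils down to a deltaE check between measured patches. A minimal sketch using CIE76 (Euclidean distance in Lab); the Lab numbers below are placeholders, not the actual Epson 3880 measurements:

```python
import numpy as np

# Placeholder Lab measurements of the same patch set from two prints
# (one made from 16-bit data, one from the same data converted to 8-bit).
lab_16bit_print = np.array([[52.1, 10.3, -14.2], [71.8, -3.5, 22.0], [30.4, 0.8, 1.1]])
lab_8bit_print  = np.array([[52.2, 10.1, -14.3], [71.7, -3.4, 22.1], [30.4, 0.9, 1.0]])

# CIE76 deltaE: Euclidean distance in Lab.
delta_e = np.sqrt(np.sum((lab_16bit_print - lab_8bit_print) ** 2, axis=1))
print("per-patch deltaE:", np.round(delta_e, 2))
print("mean deltaE     :", round(delta_e.mean(), 2), "(below ~1.0 is generally invisible)")
```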
And again, in my tests, using masks, there was zero banding on the resulting edited data. I've supplied a URL to the document designed for this kind of torture testing and the process I used; test it yourself! Make a print.
It is in the initial thread here, but I have seen the full workflow of some of my clients (as in the example above) and it does occur in 16-bit. I can even create it myself in 16-bit! Not a problem. I just had some in an image I was processing the other day. I've been seeing it in images for almost 2 decades. I'm sorry, but this part of it is not debatable.
What I am really interested in finding out is the answer to the initial questions above...
Thank you for any help in this area.
I think this discussion has run its course. We're not getting any further. I don't doubt that you see banding, but the cause is not what you think it is. You need to look elsewhere. 16/32 bits is a red herring.
There's an interesting concept known as false precision, which I think applies here. Basically, it means performing calculations at a higher precision than the system as a whole resolves. Or in other words - precision is not necessarily the same as accuracy.
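A tiny numeric illustration of false precision (arbitrary values): carrying an adjustment at two very different precisions produces results that differ far below what the 8-bit end of the chain can resolve.

```python
import numpy as np

value = 0.537
hi = np.float64(value) * np.float64(1.18)        # very high precision calculation
lo = np.float32(value) * np.float32(1.18)        # much coarser precision

print("difference between the two results:", abs(float(hi) - float(lo)))
print("both quantized to an 8-bit level  :",
      round(float(hi) * 255), "vs", round(float(lo) * 255))
```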
First you told us about banding that IS in your display path, as shown by the fact that you saw banding in data that has none.
OK, you say you have high bit data that bands on output. Maybe you can supply an actual TIFF.
But I have to agree with D Fosse that this has run its course, and that I've never seen any evidence that a full high bit document has ever produced banding on an output device that didn't itself create the banding. Kind of like your previews of banding that doesn't exist in the actual data (the gradient first shown, or the others provided).
Without an actual example document and the steps to produce the issue(s) - the start document and end result that demonstrates the issue(s) - it is not going to be resolved here.
You mentioned you can reproduce it yourself easily in 16bit mode in Photoshop.
Would you be able to share an example with us?