Apply Image in LAB Mode Funky Result
Known Participant
February 28, 2023 · Answered
4 replies · 5603 views
Hi, so I'm learning LAB mode and I'm trying to separate a red dress from a green background (I'm aware that in this case there are easier methods, but I'm trying to learn the mechanics of LAB). So, I create a blank layer and go to Image > Apply Image, with Source: Background and Channel: a. Instead of getting a greyscale image of that channel, I'm getting something in orange-green. 

 

What am I doing wrong?

 

Before doing the procedure:

 

After attempting the procedure:

 

 

Thanks, 

Daniel

This topic has been closed for replies.
Correct answer: c.pfaffenbichler

The target in that screenshot is the RGB image, not the Lab image. 

4 replies

TheDigitalDog
Inspiring
February 28, 2023

Hi, so I'm learning LAB mode and I'm trying to separate a red dress from a green background (I'm aware that in this case there are easier methods, but I'm trying to learn the mechanics of LAB).


By @McCurrySteve

For what end results? 

As outlined correctly, there's a potential for image damage in this process. The original image data you're working with generally comes from what, in what color space and bit depth? 

Lab as a color model for image correction has been largely oversold (we can go there). 

It's a great device-independent color model for some uses (outside image correction). The true mechanics of Lab. 

Author “Color Management for Photographers" & "Photoshop CC Color Management/pluralsight"
Known Participant
February 28, 2023

Oh, this section of the course was just showing the various ways that the same mask could be created: hue mask, saturation mask, various forms of channel mixing, etc., and Lab was one of the illustrations. Basically, a duplicate of the image was created and converted into LAB mode to extract an "a"-channel mask (which was then used in the RGB original).
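For anyone curious about the arithmetic behind that a-channel mask, here's a rough pure-Python sketch of it (my own illustration with made-up pixel values, not the course's code), using the standard sRGB/D65 conversion formulas:

```python
def srgb_to_a(r, g, b):
    """Lab a component of an 8-bit sRGB pixel (standard sRGB/D65 formulas)."""
    def expand(c):  # sRGB gamma -> linear light
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = expand(r), expand(g), expand(b)
    # linear sRGB -> XYZ, already divided by the D65 white point
    x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) / 0.95047  # X / Xn
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl              # Y / Yn
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    return 500 * (f(x) - f(y))

def a_mask(r, g, b):
    """8-bit a-channel value as Photoshop stores it (offset by 128)."""
    return max(0, min(255, round(srgb_to_a(r, g, b)) + 128))

print(a_mask(200, 30, 40))    # red (dress) pixel -> bright in the mask
print(a_mask(40, 160, 60))    # green (background) pixel -> dark in the mask
print(a_mask(128, 128, 128))  # neutral grey -> mid grey, since a is ~0
```

Because a encodes the red-green axis, reds land well above 128 and greens well below it, which is exactly why the a channel separates a red dress from a green background so cleanly.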

 

The instructor just wanted to show that you need to analyze the image to determine which kind of mask will get you the results you are looking for most efficiently.

 

I'm not a Lab acolyte. I'm just an amateur photographer who likes to learn about how to develop images.

 

I don't really work with jpegs, so a lot of the color balance stuff can be done mostly in Camera Raw and some fine-tuning is done in PS. I use PS because I find PS to be more precise than ACR or LR. And the creative possibilities are richer in PS, as well.

 

From my first few forays into Lab mode, it seems like a very efficient way to fine-tune colors without having to do a lot of work to avoid affecting the neutral colors.

 

I'd be happy if you'd enlighten me as to why Lab is oversold? 

TheDigitalDog
Inspiring
February 28, 2023

I'd be happy if you'd enlighten me as to why Lab is oversold? 


By @McCurrySteve

I'll let the late, great Bruce Fraser do so. It's from a post that dates back to the 1990s, long before the raw workflows and raw converters I asked you about existed. 

Show it to your instructor, ask him/her if they know of Bruce. If not, you need a new instructor. 😂

 

"Let me make it clear that I'm not adamantly opposed to Lab workflows. If

they work for you, that's great, and you should continue to use them.
 
My concern is that Lab has been oversold, and that naive users attribute to
it an objective correctness that it does not deserve.
 
Even if we discount the issue of quantization errors going from device space
to Lab and vice versa, which could be solved by capturing some larger number
of bits than we commonly do now, (though probably more than 48 bits would be
required), it's important to realise that CIE colorimetry in general, and
Lab in particular, have significant limitations as tools for managing color
appearance, particularly in complex situations like photographic imagery.
 
CIE colorimetry is a reliable tool for predicting whether two given solid
colors will match when viewed in very precisely defined conditions. It is
not, and was never intended to be, a tool for predicting how those two
colors will actually appear to the observer. Rather, the express design goal
for CIELab was to provide a color space for the specification of color
differences. Anyone who has really compared color appearances under
controlled viewing conditions with delta-e values will tell you that it
works better in some areas of hue space than others.
 
When we deal with imagery, rather than matching plastics or paint swatches,
a whole host of perceptual phenomena come into play that Lab simply ignores.
 
Simultaneous contrast, for example, is a cluster of phenomena that cause the
same color under the same illuminant to appear differently depending on the
background color against which it is viewed. When we're working with
color-critical imagery like fashion or cosmetics, we have to address this
phenomenon if we want the image to produce the desired result -- a sale --
and Lab can't help us with that.
 
Lab assumes that hue and luminance can be treated separately -- it assumes
that hue can be specified by a wavelength of monochromatic light -- but
numerous experimental results indicate that this is not the case. For
example, Purdy's 1931 experiments indicate that to match the hue of 650nm
monochromatic light at a given luminance would require a 620nm light at
one-tenth of that luminance. Lab can't help us with that. (This phenomenon
is known as the Bezold-Brücke effect.)
 
Lab assumes that hue and chroma can be treated separately, but again,
numerous experimental results indicate that our perception of hue varies
with color purity. Mixing white light with a monochromatic light does not
produce a constant hue, but Lab assumes it does -- this is particularly
noticeable in Lab modelling of blues, and is the source of the blue-purple
shift. 
 
There are a whole slew of other perceptual effects that Lab ignores, but
that those of us who work with imagery have to grapple with every day if our
work is to produce the desired results.
 
So while Lab is useful for predicting the degree to which two sets of
tristimulus values will match under very precisely defined conditions that
never occur in natural images, it is not anywhere close to being an adequate
model of human color perception. It works reasonably well as a reference
space for colorimetrically defining device spaces, but as a space for image
editing, it has some important shortcomings.
 
One of the properties of LCH that you tout as an advantage -- that it avoids
hue shifts when changing lightness -- is actually at odds with the way our
eyes really work. Hues shift with both lightness and chroma in our
perception, but not in LCH.
 
None of this is to say that working in Lab or editing in LCH is inherently
bad. But given the many shortcomings of Lab, and given the limited bit depth
we generally have available, Lab is no better than, and in many cases can be
worse than, a colorimetrically-specified device space, or a colorimetrically
defined abstract space based on real or imaginary primaries.
 
For archival work, you will always want to preserve the original capture
data, along with the best definition you can muster of the space of the
device that did the capturing. Saving the data as Lab will inevitably
degrade it with any capture device that is currently available. For some
applications, the advantages of working in Lab, with or without an LCH
interface, will outweigh the disadvantages, but for a great many
applications, they will not. Any time you attempt to render image data on a
device, you need to perform a conversion, whether you're displaying Lab on
an RGB monitor, printing Lab to a CMYK press, displaying scanner RGB on an
RGB monitor, displaying CMYK on an RGB monitor, printing scanner RGB to a
CMYK press, etc.
 
Generally speaking, you'll need to do at least one conversion, from input
space to output space. If you use Lab, you need to do at least two
conversions, one from input space to Lab, one from Lab to output space. In
practice, we often end up doing two conversions anyway, because device
spaces have their own shortcomings as editing spaces since they're generally
non-linear.
 
The only real advantage Lab offers over tagged RGB is that you don't need to
send a profile with the image. (You do, however, need to know whether it's
D50 or D65 or some other illuminant, and you need to realise that LCH
isn't the same thing as Lab.) In some workflows, that may be a key
advantage. In many, though, it's a wash.
 
One thing is certain. When you work in tagged high-bit RGB, you know that
you're working with all the data your capture device could produce. When you
work in Lab, you know that you've already discarded some of that data.
 
Bruce"
 
So again: what can't you do with raw and a good raw converter, such that it demands rendered Lab data? 
Author “Color Management for Photographers" & "Photoshop CC Color Management/pluralsight"
NB, colourmanagement
Community Expert
February 28, 2023

@McCurrySteve My 10C is that there's a potential for image damage in this process -

When you convert an image into Lab it goes into a very large colour space, which means the number of values available to define, say, areas of gradation is reduced significantly, because the image occupies only part of the Lab colourspace.

 

You can see this by looking at the histograms.

 

A "normal" RGB image may well completely populate the histogram, thus using all of the 256 values per channel to define the colour and tone in the image.

That same image moved to Lab, because Lab as very big colourspace, will populate a smaller amount of the histogram in the a and b channels (defining colour) - of course the L channel is still "full, it's still full range lightness .

 

Here's an RGB image (you can download it to test this yourself: get the Adobe RGB test image from https://www.colourmanagement.net/index.php/downloads_listing/):

 

Here's its histogram:

   

 

I then converted to Lab, here are the 2 relevant histograms:

 

 

As you can see, the RGB data which was previously spread over the entire 256 levels (in 8 bit) now takes up quite a bit less of the potential values across the colourspace.

Converting back to RGB doesn't fix this. Yes, the data spreads out to fill the RGB colourspace again, but adjacent pixels which differed may now be the same, and this can cause posterisation (sometimes called contouring) in gradations.
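You can put a number on that loss with a small pure-Python test (my own sketch, using the standard sRGB/D65 conversion formulas): feed in a full 256-step red gradient and count how many distinct 8-bit a-channel codes come out.

```python
def srgb_to_a(r, g, b):
    """Lab a component of an 8-bit sRGB pixel (standard sRGB/D65 formulas)."""
    def expand(c):  # sRGB gamma -> linear light
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = expand(r), expand(g), expand(b)
    x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) / 0.95047  # X / Xn (D65)
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl              # Y / Yn
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    return 500 * (f(x) - f(y))

# 256 distinct red levels in; count the distinct 8-bit a codes out
# (8-bit Lab stores a offset by 128)
codes = {round(srgb_to_a(r, 0, 0)) + 128 for r in range(256)}
print(len(codes))  # far fewer than 256: formerly distinct pixels now share a value
```

The a values for this gradient span only a fraction of the 8-bit range, so many inputs collapse onto the same code; those merged neighbours are the raw material of posterisation on the round trip back to RGB.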

 

It's definitely worth testing whether this is affecting your post processing image quality. There has to be a REALLY good reason to use Lab for image editing. 

 

I hope this helps
neil barstow, colourmanagement net - adobe forum volunteer - co-author: 'getting colour right'
google me "neil barstow colourmanagement" for lots of free articles on colour management

 

 

Known Participant
February 28, 2023

Neil, thank you for such a detailed response. I first started learning about Lab as part of a comprehensive Photoshop course. The Lab section showed the rudiments of how to use it: putting a control point in the center of the a and b curves before making adjustments, the idea that you can separate tonal contrast from color correction/modification, how you can create "color contrast," and the idea that you can modify the colors in an image without affecting neutral tones. It seemed to me like a great tool to use when you want to fine-tune the colors.

 

The course uses jpegs for illustrative purposes, but for my workflow I work with RAW images and then send them to PS, usually, in Prophoto RGB. 

 

I imagine that a raw file would not result in noticeable posterization? 

 

Also, I do realize that a lot of what can be done in LAB mode can be done in RGB mode. It just seems so quick and efficient.

 

I'd be glad if you could enlighten me further.

TheDigitalDog
Inspiring
February 28, 2023

What can Lab provide that you currently cannot do with the raw in a good raw converter? 

Author “Color Management for Photographers" & "Photoshop CC Color Management/pluralsight"
c.pfaffenbichler
Community Expert
February 28, 2023

Putting the same content into all three channels of a Lab image will result in something like that, because you then have identical content for Lightness, red-green (a) and blue-yellow (b). 
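A quick way to see why identical channels come out coloured rather than grey: feed the same value into L, a and b and convert back to sRGB. Here's a pure-Python sketch of the standard D65 Lab-to-sRGB formulas (the specific numbers are my own illustrative picks):

```python
def lab_to_srgb(L, a, b):
    """CIE Lab (D65) -> sRGB components in 0..1, standard formulas."""
    fy = (L + 16) / 116
    fx = fy + a / 500
    fz = fy - b / 200
    def finv(t):
        return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)
    # scale by the D65 white point
    x, y, z = 0.95047 * finv(fx), finv(fy), 1.08883 * finv(fz)
    # XYZ -> linear sRGB
    rl = 3.2406 * x - 1.5372 * y - 0.4986 * z
    gl = -0.9689 * x + 1.8758 * y + 0.0415 * z
    bl = 0.0557 * x - 0.2040 * y + 1.0570 * z
    def gamma(c):  # clamp, then linear light -> sRGB gamma
        c = min(max(c, 0.0), 1.0)
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
    return gamma(rl), gamma(gl), gamma(bl)

# Neutral: a = b = 0 gives equal R, G and B (a grey)
print(lab_to_srgb(50, 0, 0))
# Identical content in all three channels: L = a = b = 50 is strongly red-orange
print(lab_to_srgb(50, 50, 50))
```

Only a = b = 0 is neutral; copying greyscale data into a and b pushes every non-black pixel toward red (positive a) and yellow (positive b), which is the orange-green cast in the screenshot.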

Known Participant
February 28, 2023

Thanks for replying. Here's a screenshot from a tutorial I'm following, and it seems that he's doing the exact same thing but achieving the desired result. 

 

c.pfaffenbichler
Community Expert
Correct answer
February 28, 2023

The target in that screenshot is the RGB image, not the Lab image. 

Known Participant
February 28, 2023

Here's the Apply Image dialogue box with the settings I'm using: