Known Participant
February 24, 2013
Question

Why embed color profiles?


I understand color profiles as translators of the image data, so the device can show it as it should appear (to me, the human eye).

Why not apply this translation as a conversion (the profile tells how the color values should appear, right?) and be done with it? Then the monitor would use its profile to show the file correctly.

The whole idea of a "working space", or of choosing/applying a profile to an image, seems to be just applying this translation (the profile).

Why ask? Why even have an image + profile, when one can simply do image × profile = a single converted original image? Isn't this what we are doing in raw conversion? And if so, why keep the profile name attached to the file when it is already converted?

Or did I get it all wrong? Whenever I think I get it, I soon start to see that maybe I am not getting it.



TheDigitalDog
Inspiring
February 24, 2013

The profiles are the keys to showing you the colors 'correctly', but the display is a very device-dependent color space! Working spaces are not (they are quasi-device-independent). Since Photoshop 5, the display color space has been divorced from the editing of the images, which live in this quasi-device-independent RGB working space. None of the idiosyncrasies of the display color space is therefore applied to the data in which we edit our images.
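
A minimal sketch of that separation, assuming Pillow's ImageCms module and made-up file names for the image, the working-space profile and the display profile: the pixels stay in the working space, and a transform built from the working-space profile plus the display profile is applied only to the on-screen preview.

```python
from PIL import Image, ImageCms

# Hypothetical files: an image whose pixels are in the working space,
# plus the working-space and display ICC profiles.
img = Image.open("photo.tif").convert("RGB")
working = ImageCms.getOpenProfile("ProPhotoRGB.icc")
display = ImageCms.getOpenProfile("MyCalibratedDisplay.icc")

# Working space -> display transform, used for the preview only.
to_display = ImageCms.buildTransform(
    working, display, "RGB", "RGB",
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC)

preview = ImageCms.applyTransform(img, to_display)  # what the monitor is fed
preview.show()

# `img` itself is never altered: edits operate on working-space numbers,
# so the display's idiosyncrasies are never baked into the file.
```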

In raw processing, there is first an assumed color space for the captured data, which is essentially grayscale data. Pretty much only the camera manufacturers have a really good idea of the native color space (based on the sensor, the colors used in the filters, etc.). Once a converter makes such an assumption, there is usually an internal raw processing color space, such as the linear ProPhoto primaries in Adobe's raw converters, after which one can ask for any output color space the converter supports (four in ACR, any number of RGB color spaces in Lightroom). Again, the numbers have a numeric value and an associated color space, which, along with the display profile, produce a 'correct' preview.
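
A rough sketch of that pipeline: demosaiced camera RGB is mapped through CIE XYZ into linear ProPhoto primaries before any output color space is chosen. The camera matrix below is invented for illustration (only the converter or manufacturer knows the real one); the ProPhoto matrix is rounded from the published ROMM primaries.

```python
import numpy as np

# Hypothetical camera-native RGB -> XYZ (D50) matrix for some sensor.
CAM_TO_XYZ = np.array([[0.60, 0.25, 0.10],
                       [0.30, 0.65, 0.05],
                       [0.05, 0.10, 0.75]])

# ProPhoto (ROMM) RGB -> XYZ (D50), rounded; invert it to go the other way.
PROPHOTO_TO_XYZ = np.array([[0.7977, 0.1352, 0.0313],
                            [0.2880, 0.7119, 0.0001],
                            [0.0000, 0.0000, 0.8249]])
XYZ_TO_PROPHOTO = np.linalg.inv(PROPHOTO_TO_XYZ)

def camera_to_linear_prophoto(camera_rgb):
    """Map demosaiced, white-balanced camera RGB into linear ProPhoto primaries."""
    xyz = camera_rgb @ CAM_TO_XYZ.T
    return xyz @ XYZ_TO_PROPHOTO.T

pixel = np.array([0.42, 0.31, 0.18])     # one demosaiced pixel, linear 0..1
print(camera_to_linear_prophoto(pixel))  # same scene color, new numeric scale
```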

The display has its own unique color space, and hence an ICC profile to define it; the data in an RGB working space have a different and independent color space, again with an ICC profile to define that data. That's one reason why the display color space is divorced from the editing space. The color space that defines the data can be vastly larger and differently shaped than the color space of the display (why clip the editing color space to something even close to Adobe RGB (1998), let alone sRGB?).
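
One way to see why the editing space shouldn't be clipped to the display's space: push a fully saturated ProPhoto color toward sRGB and watch it land outside the 0..1 range. The matrices are rounded from the published primaries, and the D50/D65 white-point adaptation is skipped to keep the sketch short; for a color this saturated the conclusion doesn't change.

```python
import numpy as np

# ProPhoto RGB -> XYZ (D50), rounded from the ROMM specification.
PROPHOTO_TO_XYZ = np.array([[0.7977, 0.1352, 0.0313],
                            [0.2880, 0.7119, 0.0001],
                            [0.0000, 0.0000, 0.8249]])

# XYZ (D65) -> linear sRGB, rounded from IEC 61966-2-1.
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

prophoto_green = np.array([0.0, 1.0, 0.0])  # fully saturated ProPhoto green
srgb = XYZ_TO_SRGB @ (PROPHOTO_TO_XYZ @ prophoto_green)

print(srgb)                                   # roughly [-0.66, 1.20, -0.14]
print(bool(np.any((srgb < 0) | (srgb > 1))))  # True: outside the sRGB gamut
```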

Author, "Color Management for Photographers" & "Photoshop CC Color Management" (Pluralsight)
IdamIndia (Author)
Known Participant
February 24, 2013

At the very start, when the image is taken, a profile (dependent on the capturing device) is applied or assumed, and that profile is attached or embedded. My question is just about this very step, not about when the display profile is used to interpret the data for the display's characteristics. Why is the original file not a "converted" file?

I have yet to grasp the point of working space vs. display space. It makes sense, but I don't completely get it: is the display profile not related to the working space? The working space is, after all, about the actual values of the pixels in the file, while the display is about how they are rendered. While working and editing, I am actually looking at the display to make the edits.

The clipping point you make at the end gives some clue. I understand it as clipping that I can't really see in the image (as full black or full white) in Lightroom or Photoshop, but that I notice when I enable show-clipping. So the clipping shown is based on the working-space data, not the display; the display may be showing an even larger area as all black or all white. I mention this to confirm whether I understand the concepts correctly.

If the original is converted, as per my original query, I can imagine the need for the key for display and print, and therefore the need for a profile. The moment the display or print conditions change, those output profiles need to change. But why not rely on the original image as-is (converted based on its capture-device profile, since that is not going to change)?

TheDigitalDog
Inspiring
February 24, 2013

IdamIndia wrote:

At the very start, when the image is taken, a profile (dependent on the capturing device) is applied or assumed, and that profile is attached or embedded. My question is just about this very step, not about when the display profile is used to interpret the data for the display's characteristics.

Because that data isn't anything like, nor should it be described by, a display profile, unless the data IS from an actual display. The data is what it is: a set of numbers that need a scale (which is what the profile provides for THAT data). The display is what it is and needs a profile to define that device. One profile by itself is like one hand clapping: you can define one device's numbers with one profile, but you can't display them. You need another profile for that.

If you speak German and I don't, we can't communicate. If you speak German and I speak English, we still have an issue. If you speak German, I speak English, and your friend speaks both, we have a translator and can converse. The original data is German. The display is English. The profiles, and just as importantly the profile connection space (your friend), allow your color space and my color space to produce an understood conversation.
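
The same analogy as arithmetic, assuming two standard RGB spaces purely for illustration: the source profile takes the "German" numbers into the profile connection space (CIE XYZ, the friend who speaks both languages), and the destination profile takes them out the other side as "English" numbers. Matrices are rounded from the published sRGB and Adobe RGB (1998) primaries, both relative to D65, with gamma encoding ignored to keep things linear.

```python
import numpy as np

# Source space ("German") -> PCS: sRGB primaries to CIE XYZ (D65), rounded.
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

# Destination space ("English"): Adobe RGB (1998) primaries to XYZ, inverted
# so we can leave the PCS into the destination's numbers.
ADOBERGB_TO_XYZ = np.array([[0.5767, 0.1856, 0.1882],
                            [0.2973, 0.6274, 0.0753],
                            [0.0270, 0.0707, 0.9911]])
XYZ_TO_ADOBERGB = np.linalg.inv(ADOBERGB_TO_XYZ)

source_rgb = np.array([0.8, 0.3, 0.1])  # numbers that only mean something with their profile
xyz = SRGB_TO_XYZ @ source_rgb          # source profile: into the PCS (the translator)
dest_rgb = XYZ_TO_ADOBERGB @ xyz        # destination profile: out of the PCS

print(dest_rgb)  # the same color, now expressed in the destination's numbers
```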

Author, "Color Management for Photographers" & "Photoshop CC Color Management" (Pluralsight)