Somehow the quote "I reject your reality and substitute my own" comes to mind here. :)
No, my explanation of alpha channels is not "wrong" in any way, shape, or form. If it is, please point out my error, and back it up with documentation showing that it is in error. And the incorrect TGA format documentation that tends to accompany some Adobe products most *certainly* does not count.
The reason Adobe so stubbornly turns a blind eye to proper premultiplied behaviour is that they use a straight alpha pipeline internally. That pipeline is *incapable* of representing a color such as premultiplied (1,1,0,0) (the "additive yellow" I mentioned in my last post). Since they cannot even represent it, they have a vested interest in defining it out of existence and claiming it is "wrong", when it is not. Which, I bet, accounts for 90% of the stubbornness perceived in this thread.
Adobe products use a "straight" alpha (where alpha is considered more as a "mask" than opacity) because they stem from a paint tool background, where a "mask" is something you create out of pixels that are already there.
With that mindset, saying you have "multiplied with the alpha" in a premultiplied file makes sense.
But from the context of a renderer, this makes no sense. In a renderer, it's about taking samples off some geometry. In the simple case of opaque objects, let's say we take 16 samples in a pixel. 12 of these hit an opaque yellow (1,1,0,1) object, and 4 of them hit the background (0,0,0,0).
When these samples are averaged into the pixel, we get a color of (0.75,0.75,0,0.75). This is not because we have "multiplied with alpha"; it is simply because an entity with an alpha of one *covered* 75% of the pixel.
Now assume instead that the background was red (1,0,0,0), which is quite legal. The downsampled pixel would have a color of (1,0.75,0,0.75), and pixels where no samples hit the object at all would have a color of (1,0,0,0).
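To make the coverage point concrete, here is a tiny sketch (Python, with a made-up downsample helper, purely for illustration) of that 16-sample box filter. Note that there is no "multiply by alpha" step anywhere; the premultiplied-looking values fall out of coverage alone:

def downsample(samples):
    # Plain average of the RGBA samples into one pixel.
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(4))

yellow   = (1.0, 1.0, 0.0, 1.0)   # opaque yellow object
black_bg = (0.0, 0.0, 0.0, 0.0)   # transparent black background
red_bg   = (1.0, 0.0, 0.0, 0.0)   # the red background case

print(downsample([yellow] * 12 + [black_bg] * 4))   # (0.75, 0.75, 0.0, 0.75)
print(downsample([yellow] * 12 + [red_bg] * 4))     # (1.0, 0.75, 0.0, 0.75)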
While this may not be so useful for traditional compositing, it can be useful for all sorts of other reasons. For example, a 3ds Max render always sets the alpha to 0 in pixels that are "background", even if they are filled with, say, a sky, or anything else. So there is completely legal RGB data there, but with an alpha of zero.
If this is put into an EXR, and loaded by Photoshop, this sky is now lost forever. So data is clearly destroyed in the process.
We furthermore have these luminescent pixels. Go back to our object-against-a-background example, but instead of making the object just yellow, let's make it luminescent yellow, i.e. (1,1,0,0). When this object covers 75% of the pixel, and the background is transparent black (0,0,0,0), the final pixel will be (0.75,0.75,0,0).
This means that when this color is comped on top of the underlying layer, the premultiplied "over" operation, which is...
r = fg + bg * (1 - fg.a)
...will work out to an add, since the alpha is zero, and hence (1 - alpha) is (1 - 0), i.e. 1, so the math boils down to
r = fg + bg
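Spelled out as a tiny sketch (Python again; the underlying layer color is just something I made up), the luminescent pixel from above really does get *added* to whatever is beneath it:

def over(fg, bg):
    # Premultiplied "over": r = fg + bg * (1 - fg.a), per channel.
    a = fg[3]
    return tuple(f + b * (1.0 - a) for f, b in zip(fg, bg))

luminescent = (0.75, 0.75, 0.0, 0.0)   # 75% coverage of additive yellow
plate       = (0.2, 0.3, 0.4, 1.0)     # arbitrary underlying layer

print(over(luminescent, plate))   # (0.95, 1.05, 0.4, 1.0), i.e. a pure add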
The fact is that Photoshop simply cannot do this, because it is "straight alpha" internally.
After Effects is kind of limping along here with its "Luminescent Premultiplied" mode, but it is still a hack and breaks down any time you add even the slightest effect, since the AE effects pipeline is also, unfortunately, straight alpha.
Straight alpha has a few merits (notably with multiplicative post math and in color correction), but it ends up being needlessly complex (compared to the premultiplied math) for lots of other operations like blurs, filtering, etc. The math in the premultiplied case can simply treat the alpha as another channel, and apply the exact same operations to it as it does to RGB, for things like blurs, convolutions, whatnot. A "straight alpha" system has a much, much harder time here.
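As a rough illustration of that last point (Python, a 1D kernel over a pixel neighbourhood, names made up): in a premultiplied pipeline one filter tap looks like this, and the alpha channel needs no special casing whatsoever:

def filter_premultiplied(pixels, kernel):
    # One weighted sum, applied identically to R, G, B and A.
    return tuple(
        sum(w * p[c] for w, p in zip(kernel, pixels))
        for c in range(4)
    )

A straight alpha system has to weight every RGB tap by its alpha on the way in and divide the result by the filtered alpha on the way out; in other words, it ends up premultiplying and unpremultiplying around the filter anyway, which is exactly the extra complexity I'm talking about.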
But hey. What do I know? Apparently I've been "defined wrong" in this discussion, even though I have the inventor of the alpha channel on my side.
/Z