Multi-layer format plugin: 16-bit depth problem

Community Beginner, Dec 10, 2009

Hello!

I'm creating a format plugin based on the simpleformat example. When I load the layers at 8-bit depth I have no problem, but when I try to load the layers at 16-bit depth, the colors come out exactly doubled!

Params with 8 bits:

gFormatRecord->depth = 8;
gFormatRecord->loPlane = 0;
gFormatRecord->hiPlane = 2;
gFormatRecord->colBytes = 3;
gFormatRecord->rowBytes = width * 3 * (depth / 8);
gFormatRecord->planeBytes = 1;
gFormatRecord->imageMode = plugInModeRGBColor;
gFormatRecord->maxValue (not set)

Params with 16 bits:

gFormatRecord->depth = 16;
gFormatRecord->loPlane = 0;
gFormatRecord->hiPlane = 2;
gFormatRecord->colBytes = 6;
gFormatRecord->rowBytes = width * 3 * (depth / 8);
gFormatRecord->planeBytes = 2;
gFormatRecord->imageMode = plugInModeRGB48;
gFormatRecord->maxValue = 0x8000;

I load the image using its own SDK, and I can load the layers at whichever depth I want (8, 16 or 32 bits). I made a simple program that loads the images and saves them to TIFF files, and there are no problems loading at any depth. So the buffers that I get in the plugin and hand to Photoshop are correct.

I repeat: if I use the eyedropper and pick the same pixel in the layer loaded with my plugin and in the image created with the format SDK, the color of the layer loaded with my PS plugin is exactly double!

Any idea?

I attach an image comparing both images: the plugin result and the original image.
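For context, a minimal sketch of how those fields might drive a layer read in a simpleformat-style plugin. The FormatRecord fields are the ones listed above; the function name and the buffer/width/height locals are hypothetical, not SDK names, and the selector plumbing is omitted:

  // Minimal sketch, assuming an interleaved RGB buffer already filled by our own SDK.
  // "buffer", "width" and "height" are hypothetical locals.
  void HandLayerToPhotoshop16(unsigned short *buffer, int width, int height)
  {
    gFormatRecord->depth      = 16;
    gFormatRecord->imageMode  = plugInModeRGB48;
    gFormatRecord->loPlane    = 0;
    gFormatRecord->hiPlane    = 2;
    gFormatRecord->colBytes   = 6;              // 3 channels * 2 bytes, interleaved
    gFormatRecord->rowBytes   = width * 6;
    gFormatRecord->planeBytes = 2;
    gFormatRecord->maxValue   = 0x8000;         // Photoshop's 16-bit white point

    gFormatRecord->theRect.top    = 0;
    gFormatRecord->theRect.left   = 0;
    gFormatRecord->theRect.bottom = (short)height;
    gFormatRecord->theRect.right  = (short)width;

    gFormatRecord->data = buffer;               // hand the whole tile to Photoshop
    *gResult = gFormatRecord->advanceState();
    gFormatRecord->data = NULL;
  }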

Community Beginner, Dec 10, 2009

Wow. I tried the following...

...


  // Halve every 16-bit sample in the interleaved RGB buffer.
  unsigned short *samples = (unsigned short *)gFormatRecord->data;
  for (int i = 0; i < gInfo->width * gInfo->height * 3; i++)
  {
    samples[i] >>= 1;
  }

  if (*gResult == noErr)
  {
    *gResult = gFormatRecord->advanceState();
  }

...

And the result is correct! All the data was off by exactly one bit (in binary, the right shift is of course just a divide by 2). Why?

I've read that Photoshop uses 15 bits instead of 16 for its 16-bit depth format. Is this the reason? Does it use the 15 most significant bits of the 16-bit data, or something like that?

Participant, Dec 10, 2009

The 16-bit data range is 0..0x8000, so normalising to 0..0xffff is not merely a bit shift.
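A quick numeric check of why (my own arithmetic, not from the thread): going up, 0x8000 << 1 = 0x10000, which no longer fits in 16 bits, whereas the linear mapping v * 0xFFFF / 0x8000 sends 0x8000 to 0xFFFF; going down, 0xFFFF >> 1 = 0x7FFF, one short of the 0x8000 that full white should become.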

Community Beginner, Dec 10, 2009

But normalising shouldn't be my job; it's Photoshop's. I should just pass the buffers at the depth I have set, and Photoshop should do the rest.

I still believe the problem is that Photoshop only uses 15 bits to represent the 16-bit depth format. But I can't believe I'm the first person to run into this! I'm probably making a mistake somewhere. Shifting my buffer by one bit looks like a dirty solution.

Participant, Dec 10, 2009

It is slightly misleading to say "Photoshop uses 15 bits"; the data's range is 0..0x8000 inclusive. It's up to you to deal with that.

I'm sure Chris is tired of rehashing the 68K-era rationales - as I made him do once myself. (If Adobe had archived these forums you would be able to find prior discussions. If interested, perhaps try Google Groups.)

Dec 10, 2009

The reasons for using 0..32768 still hold, regardless of the current processors.

As for the main problem -- does this only fail for layers, or for flat documents as well?

If flat also fails, check the SDK -- because we have many 16-bit flat file formats working.

But I can't think of another 16-bit layered format.

Community Beginner, Dec 11, 2009

Hi Chris! I just checked it by deactivating layers, so that I get the formatSelectorReadStart, Continue, etc. selectors instead of formatSelectorReadLayerStart, Continue, etc., and it works fine. So the problem occurs only when I use 16 bits on layers.

I didn't know that there were no 16-bit layered image formats. The thing is, the image format I'm working with handles data up to 32-bit depth (a file can have 1, 5... or 20 layers), so it would be interesting for us to open our layered format in Photoshop at 32-bit depth. I need to talk with some coworkers and decide whether we will use 32, 16, or only 8 bits for our file format in Photoshop.

Thanks!

Dec 11, 2009

It just means you were the first to exercise that API, and that the person who added the layer support in Photoshop missed something with 16-bit layer data.

Because of that: I'd set maxValue to 32768 (to avoid problems once the bug is fixed), and scale the data down to 0..32768 yourself (to deal with it until then). Please note - that is not just a right shift!
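For illustration, a sketch of that scale-down dropped in where the right-shift loop was in the earlier post (gInfo, gFormatRecord and gResult as used there; the exact rounding is my own choice, not something the SDK mandates):

  // Map 0..65535 source samples onto Photoshop's 0..32768 range.
  // (v * 32768 + 32767) / 65535 sends 0 to 0 and 65535 to 32768 exactly.
  unsigned short *samples = (unsigned short *)gFormatRecord->data;
  long count = gInfo->width * gInfo->height * 3;
  for (long i = 0; i < count; i++)
  {
    samples[i] = (unsigned short)(((unsigned long)samples[i] * 32768 + 32767) / 65535);
  }

  gFormatRecord->maxValue = 0x8000;   // declare the 0..32768 range up front as well

  if (*gResult == noErr)
  {
    *gResult = gFormatRecord->advanceState();
  }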

Community Beginner, Dec 14, 2009

But if my data is between 0..65536 and I right-shift all of it (divide by 2), the range becomes 0..32768. Is that not correct?

Furthermore, I compared the right-shifted 16-bit image loaded in Photoshop against a correct 16-bit TIFF image created with my SDK (for example, by blending both images with the difference operator and pushing brightness and contrast to the maximum), and there is no difference between them.

PS: I'm glad to have found a Photoshop bug and to contribute to the development.

Participant, Dec 14, 2009

Your data is most likely 0..0xffff, not 0..0x10000.

Dec 14, 2009

But if my data is between 0..65536 and I right-shift all of it (divide by 2), the range becomes 0..32768. Is that not correct?

No, that's not correct.

Your data is 0..65535. Shifting that right by one will get you 0..32767.

Again: Photoshop's representation is 0..32768  (simplifies a lot of math).
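Presumably part of that simplification (my own illustration, not code from the thread) is that with white at 32768 = 2^15, renormalising after a multiply is a shift rather than a division:

  // Multiply blend with white == 32768: renormalise with a 15-bit shift.
  unsigned short MulBlend15(unsigned short a, unsigned short b)   /* a, b in 0..32768 */
  {
    return (unsigned short)(((unsigned long)a * b) >> 15);
  }

  // The same blend with white == 65535 needs a genuine divide to stay exact.
  unsigned short MulBlend16(unsigned short a, unsigned short b)   /* a, b in 0..65535 */
  {
    return (unsigned short)(((unsigned long)a * b) / 65535);
  }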

Contributor, Dec 10, 2009

I think the reason that 16-bit is in fact 15 is mostly legacy (Chris Cox can probably provide more details about it). The fact, though, is that yes, we only get 32K levels per channel in 16-bit mode.

Community Beginner, Dec 15, 2009


Oooook! I understand now!

So... the most significant bit is used only for full red, green or blue (or any mix of them). Am I right?

Anyway, right-shifting by 1 bit introduces a tiny error, but I will never get a color overflow (e.g. 0 when I should get full white). I think the right shift has a lower CPU cost than rescaling to a new range, though the difference is probably not perceptible (except maybe in very large files).
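For a rough sense of that error (my own arithmetic, not from the thread): the exact rescale sends 65535 to 32768, whereas 65535 >> 1 = 32767, so a plain shift darkens full white by one level in 32768, about 0.003%, and can never overshoot. Whether the multiply-and-divide of the exact mapping is measurably slower than a shift depends on the build; for typical layer sizes the file I/O is likely to dominate either way.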
