Hi gang;
I believe I am checking the PF_PixelFormat correctly, yet I am getting incorrect results.
My plugin should use 8 and 16 bits per channel color depths only.
So in the GlobalSetup I use the flag:
PF_OutFlag_DEEP_COLOR_AWARE;
Then, in my render function I declare the following:
AEGP_SuiteHandler suites(in_data->pica_basicP);
PF_WorldSuite2 *wsP = NULL;
PF_PixelFormat format = PF_PixelFormat_INVALID;
// Acquire the World suite, then ask the output world for its pixel format.
ERR(suites.Pica()->AcquireSuite(kPFWorldSuite, kPFWorldSuiteVersion2, (const void**)&wsP));
ERR(wsP->PF_GetPixelFormat(output, &format));
// (And the acquire is paired with a release once the suite is no longer needed:
// suites.Pica()->ReleaseSuite(kPFWorldSuite, kPFWorldSuiteVersion2);)
And finally, I can check my color depth mode by using:
if (format == PF_PixelFormat_ARGB32) // 8 bit
if (format == PF_PixelFormat_ARGB64) // 16 bit
if (format == PF_PixelFormat_ARGB128) // 32 bit (I am not using this color depth)
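For context, here is the same dispatch written defensively (a sketch; Render8 and Render16 are just stand-ins for my per-depth code, and the default case rejects any format the plugin doesn't support instead of falling through):

switch (format) {
    case PF_PixelFormat_ARGB32:   // 8 bit
        err = Render8(in_data, out_data, params, output);
        break;
    case PF_PixelFormat_ARGB64:   // 16 bit
        err = Render16(in_data, out_data, params, output);
        break;
    default:                      // 32 bit or anything unexpected
        err = PF_Err_BAD_CALLBACK_PARAM;
        break;
}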
The problem is that when I test this in After Effects, it does not differentiate between 16 bit and 32 bit color mode. It tries to generate the effect even when I switch to 32 bit color mode, and then crashes. Between 8 and 16, it differentiates just fine.
I should mention that this plugin also checks out another layer, but I don't think that could be the issue. What could I be doing wrong here?
Thanks in advance,
-Rich
Hi Richard
One thing to check: are you using SmartFX? If not, your effect can't be 32 bit, just 8 (or 16 if PF_OutFlag_DEEP_COLOR_AWARE is set, as you have).
James is right: since you say you receive layer pixels even in 32 bit mode, you are most likely developing a SmartFX plugin. A SmartFX plugin (designated by setting the PF_OutFlag2_SUPPORTS_SMART_RENDER flag) always has to support 32bpc!
Here is the matrix as far as I know:
1. PF_OutFlag2_SUPPORTS_SMART_RENDER is not set:
- and PF_OutFlag_DEEP_COLOR_AWARE is not set: you have to implement 8bpc in PF_Cmd_RENDER
- and PF_OutFlag_DEEP_COLOR_AWARE is set: you have to implement 8bpc and 16bpc in PF_Cmd_RENDER
2. PF_OutFlag2_SUPPORTS_SMART_RENDER is set:
- and PF_OutFlag_DEEP_COLOR_AWARE is not set: you have to implement 8bpc and 32bpc in PF_Cmd_SMART_RENDER
- and PF_OutFlag_DEEP_COLOR_AWARE is set: you have to implement 8bpc, 16bpc and 32bpc in PF_Cmd_SMART_RENDER
For efficiency reasons, it is strongly recommended to support the newer SmartFX API and 32bpc!
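A minimal sketch of how those flags are set in GlobalSetup (the function signature follows the SDK samples; only the flag lines matter here):

static PF_Err GlobalSetup(PF_InData *in_data, PF_OutData *out_data,
                          PF_ParamDef *params[], PF_LayerDef *output)
{
    // Case 1: render-only, deep-color-aware (8bpc + 16bpc via PF_Cmd_RENDER).
    out_data->out_flags = PF_OutFlag_DEEP_COLOR_AWARE;

    // Case 2: SmartFX (8, 16 and 32bpc via PF_Cmd_SMART_RENDER):
    // out_data->out_flags2 = PF_OutFlag2_SUPPORTS_SMART_RENDER;

    return PF_Err_NONE;
}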
Hi James and Toby;
Thanks for your replies.
Nope, I'm not interested in making a SmartFX plugin. As a matter of fact, I'm not interested in 32 bit at all, just 8 and 16, but it looks like it cannot differentiate between 16 and 32. This means if I set it to 32 bit, it still understands it as 16 (for some reason) and still performs the effect (which it shouldn't). I am not using the smart render flag you mentioned above anywhere, Toby.
So even though I am performing my effects only under the following conditions:
if (format == PF_PixelFormat_ARGB32) // 8 bit
if (format == PF_PixelFormat_ARGB64) // 16 bit
It still runs in 32 bit mode, and since it's not made for that, it crashes.
Perhaps I should be using a different method to check color depth?
For instance, in this thread: Re: Need help processing raw image data, they recommended using extra->input->bitdepth, which seems to return 8, 16 or 32.
Any thoughts?
Thanks,
-Rich
Even if yours isn't a SmartFX plugin, it can and of course will still be applied in 32bpc mode in AE.
So putting a non-SmartFX plugin on a layer in 32bpc mode will (as far as I know) cause AE to convert the layer pixel data from 32bpc to 16bpc (if deep-aware) or 8bpc (if not deep-aware) and then send that to your plugin.
What is the specific reason for only supporting the older interface?
You could easily make a SmartFX plugin and convert incoming 8bpc or 32bpc pixels into 16bpc and then apply your 16bpc algorithm on that, if you want.
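For illustration, per-channel conversion along those lines could look like this (a sketch; the helper names are invented, and PF_MAX_CHAN8 is 255 and PF_MAX_CHAN16 is 32768 in the SDK headers):

// 8bpc channel (0..255) to 16bpc channel (0..32768), rounded.
static inline A_u_short Chan8To16(A_u_char c)
{
    return (A_u_short)(((A_long)c * PF_MAX_CHAN16 + PF_MAX_CHAN8 / 2) / PF_MAX_CHAN8);
}

// 32bpc float channel (nominally 0.0..1.0, may overshoot) to clamped 16bpc.
static inline A_u_short Chan32To16(PF_FpShort c)
{
    if (c < 0.0f) c = 0.0f;
    if (c > 1.0f) c = 1.0f;
    return (A_u_short)(c * PF_MAX_CHAN16 + 0.5f);
}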
Or to put it another way: if you don't have a SmartFX plugin, you CAN'T detect 32bpc color depth. This is intended behaviour and was the main reason SmartFX was created in the first place. The old API will only get you 8 or 16bpc, and when applied on a 32bpc layer, you will still get 8 or 16bpc (converted) data.
Hi Toby;
I see... Well, my reasoning for starting with the CS6 SDK was that this was my first time working with the Adobe SDK and with Visual Studio, so I thought it would be a simpler entry point. Now that I've learnt so much of it (somewhat), I'm hesitant to jump into a newer one, but maybe this is the best approach. Although, I wouldn't even know where to begin with a newer SDK.
I guess if I stick with the current CS6 SDK, based on what you said above, my two current options to fix this would be to add 32 bit support or limit the plugin to just 8 bit.
It also doesn't sound like extra->input->bitdepth would offer any different behaviour (or advantage); it would run into the same issues based on how you've described the plugins to work.
If I don't stick with the current SDK, which SDK version would you recommend? The latest? I am also using VS 2012 so I don't know if I'd need a newer one for a newer SDK...
Thanks,
-Rich
It just occurred to me that when you asked why I'm using the older interface, you were probably asking why I'm not using PF_Cmd_SMART_RENDER, not why I'm using an older SDK! Duh.
Ok, so you are suggesting I use the Smart Render function instead of my current PF_Cmd_RENDER. At least that is what I understand.
So let me look into that. Especially if you suggest that is the way to go.
Thanks,
-Rich
Why do you want to change SDK?
SmartFX was declared the "new"/recommended AE plugin standard in the AE 7.0 SDK in 2005 (long before any CS* SDK versions)! Please read the CS6 SDK documentation a bit more; you'll find some more information and explanations there.
You can stay with the CS6 SDK and VS2012 and develop both render and smartrender plugins with that combination, for AE CS6 and newer versions.
No idea about extra->input->bitdepth; extra is a void pointer, and what you can cast it to depends on the command being handled, but I doubt you will get anything useful out of it in the old API. Stick to the recommended suite functions for getting the bit depth.
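(On the SmartFX side that field does exist, for what it's worth. In the plugin entry point, extra is the void* last argument; for PF_Cmd_SMART_RENDER it points to a PF_SmartRenderExtra, roughly like this:)

case PF_Cmd_SMART_RENDER:
{
    PF_SmartRenderExtra *srX = reinterpret_cast<PF_SmartRenderExtra*>(extra);
    A_long depth = srX->input->bitdepth;  // 8, 16 or 32
    // ... dispatch on depth ...
}
break;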
Your best option: make a SmartFX plugin (there are some templates/examples in the CS6 SDK), implement a conversion function to convert your input data from 8, 16 or 32bpc into the format you want, do the processing, then convert it back to the target format. Pretty easy. These days most plugins internally convert all input to 32bpc, I'd say.
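Roughly, that SmartFX pair looks like this (a sketch modeled on the SDK samples; PARAM_INPUT, the UnionLRect helper from the samples' utility code, and the ERR/ERR2 macros from AE_Macros.h are all assumed):

enum { PARAM_INPUT = 0 };  // assumed index of the default input layer

static PF_Err PreRender(PF_InData *in_data, PF_OutData *out_data,
                        PF_PreRenderExtra *extra)
{
    PF_Err err = PF_Err_NONE;
    PF_RenderRequest req = extra->input->output_request;
    PF_CheckoutResult in_result;

    ERR(extra->cb->checkout_layer(in_data->effect_ref,
                                  PARAM_INPUT, PARAM_INPUT, &req,
                                  in_data->current_time,
                                  in_data->time_step,
                                  in_data->time_scale,
                                  &in_result));

    UnionLRect(&in_result.result_rect, &extra->output->result_rect);
    UnionLRect(&in_result.max_result_rect, &extra->output->max_result_rect);
    return err;
}

static PF_Err SmartRender(PF_InData *in_data, PF_OutData *out_data,
                          PF_SmartRenderExtra *extra)
{
    PF_Err err = PF_Err_NONE, err2 = PF_Err_NONE;
    PF_EffectWorld *inputW = NULL, *outputW = NULL;

    ERR(extra->cb->checkout_layer_pixels(in_data->effect_ref, PARAM_INPUT, &inputW));
    ERR(extra->cb->checkout_output(in_data->effect_ref, &outputW));

    // extra->input->bitdepth is 8, 16 or 32 here: convert the input into
    // your working format, process it, then convert into outputW.

    ERR2(extra->cb->checkin_layer_pixels(in_data->effect_ref, PARAM_INPUT));
    return err;
}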
Although I have to admit I still don't understand your initial reasoning.
If you really want a plugin that works only in 8bpc and 16bpc, it seems you already have that!
PF_OutFlag_DEEP_COLOR_AWARE is set, PF_OutFlag2_SUPPORTS_SMART_RENDER is not set.
The only thing you were confused about is that in 32bpc mode you still get 16bpc data, but as I said, that is to be expected, as "out of courtesy" After Effects does the 32bpc -> 16bpc conversion for you under the hood.
Hi Toby;
I hope I haven't driven you away with the length of this thread and that you're willing to bear with me a little longer...
I am going to convert this to a Smart plugin as you have suggested. But first, I want to know why it still crashes if, as you said, I have set this up properly to work with 8 and 16 bit color.
I traced the crash to the actual checking out of another layer:
// Check out pixel data from another layer (with help from Francois)
ERR(PF_CHECKOUT_PARAM( in_data,
LAYER_DISK_ID,
in_data->current_time,
in_data->time_step,
in_data->time_scale,
&check));
The above, when I set my color depth to 32 bit, hangs the system. It works in 8 bit and 16 bit as it should. You said that in 32 bit, AE simply converts the 32 bit color data to 16 "out of courtesy", but something must still be different, or the call above wouldn't hang the system in 32 bit mode when 16 bit mode doesn't.
If I comment out the checkout call, switching to 32 bit doesn't hang the system. Should I be performing some additional checks that I'm currently missing and that might be leading to the hang?
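For reference, here is the full pattern as I understand the SDK wants it; every checkout is paired with a check-in (a sketch; AEFX_CLR_STRUCT and the ERR/ERR2 macros come from the SDK's macro headers, and whether the check-in relates to the hang, I don't know):

PF_Err err = PF_Err_NONE, err2 = PF_Err_NONE;
PF_ParamDef check;
AEFX_CLR_STRUCT(check);

ERR(PF_CHECKOUT_PARAM(in_data,
                      LAYER_DISK_ID,
                      in_data->current_time,
                      in_data->time_step,
                      in_data->time_scale,
                      &check));

// A layer param can legitimately be empty, so test before using it.
if (!err && check.u.ld.data) {
    // ... read pixels from check.u.ld (a PF_LayerDef / PF_EffectWorld) ...
}

ERR2(PF_CHECKIN_PARAM(in_data, &check));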
I want to reiterate that I very much appreciate your help; it's helping me tremendously in learning all this. Are there any leads that come to mind that I can check?
Regards,
-Rich
Hi Rich,
no problem, always glad to help if there is time.
And you are absolutely right, it should not crash or hang with what you intended to do.
The function call you posted looks great to me, at least without context. Maybe some of your parameter naming/IDs are mixed up?
Why don't you check out the "Checkout" sample in the CS6 SDK? It does just this: it uses the old render interface (so only 8bpc and 16bpc), takes two layer inputs and combines them into one output layer. I just tested this sample with AE CS6 and VS2012 and it works correctly with 8bpc, 16bpc and 32bpc without crashes.
Hello,
This is an old question, so you may very well have succeeded in writing your plugin; however, I thought I'd offer my thoughts on writing plugins that handle all bit depths efficiently. (I've been away from this forum for quite a while!)
Personally I find the way the examples are written to be inefficient - they tend to have render functions for each bit depth, essentially repeating the code but changing the pixel struct (i.e. PF_Pixel, PF_Pixel16 and PF_Pixel32) and the corresponding channel variable types (A_u_char, A_u_short, PF_FpShort).
Instead I write a single render function that works in floating point, and then have three iteration functions that convert the pixel data to float in the 8 and 16-bit cases (in the 32-bit case it's left as is), call the float processing function, and then convert the result back again if necessary.
This might seem counterintuitive but, depending on the complexity of your plugin's render function, it can save debugging time and will always render the same result regardless of the bit depth of your project. I also don't see any noticeable lag doing things this way when using my plugins.
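A minimal sketch of what I mean (FloatPix, ProcessPixelF and Iter8 are invented names; the 16 and 32-bit wrappers follow the same shape):

typedef struct { PF_FpShort a, r, g, b; } FloatPix;

// The single "real" render function: pure float in, float out.
static void ProcessPixelF(FloatPix *p)
{
    p->r = 1.0f - p->r;  // placeholder operation: invert red
}

// 8-bit wrapper, usable as the pix_fn for the iterate callback.
static PF_Err Iter8(void *refcon, A_long x, A_long y,
                    PF_Pixel *in, PF_Pixel *out)
{
    FloatPix f = { in->alpha / 255.0f, in->red / 255.0f,
                   in->green / 255.0f, in->blue / 255.0f };
    ProcessPixelF(&f);
    out->alpha = (A_u_char)(f.a * 255.0f + 0.5f);
    out->red   = (A_u_char)(f.r * 255.0f + 0.5f);
    out->green = (A_u_char)(f.g * 255.0f + 0.5f);
    out->blue  = (A_u_char)(f.b * 255.0f + 0.5f);
    return PF_Err_NONE;
}

// The 16-bit wrapper divides/multiplies by PF_MAX_CHAN16 (32768) instead;
// the 32-bit one passes PF_PixelFloat channels straight through.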
Yes, what Christian wrote (and what I also suggested further above) is the way most plugins are (or probably should be) written these days. Convert all input formats into one intermediate format (for example, I convert to RGBA 32 bit float), then do your processing on those buffers, and finally convert it back to the desired output format of the host. That way your processing function stays portable and you can easily add support for other pixel formats or hosts. The conversion can also be done quite efficiently, using either the iterate functions provided by Adobe or simply standard C++ threads to do line-by-line or block processing of the pixels in memory.
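As one possible shape for the C++ threads variant (a sketch; ConvertRows and ConvertParallel are invented names, and the actual per-pixel conversion is elided):

#include <algorithm>
#include <thread>
#include <vector>

// Convert one horizontal band of rows of an 8bpc world.
static void ConvertRows(PF_EffectWorld *world, A_long first, A_long last)
{
    for (A_long y = first; y < last; ++y) {
        PF_Pixel *row = (PF_Pixel*)((char*)world->data + y * world->rowbytes);
        // ... convert row[0 .. world->width - 1] into the intermediate buffer ...
        (void)row;
    }
}

// Split the world into bands, one per hardware thread, and convert in parallel.
static void ConvertParallel(PF_EffectWorld *world)
{
    const unsigned n = std::max(1u, std::thread::hardware_concurrency());
    const A_long band = (world->height + (A_long)n - 1) / (A_long)n;
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i) {
        const A_long first = (A_long)i * band;
        const A_long last = std::min<A_long>(first + band, world->height);
        if (first < last)
            pool.emplace_back(ConvertRows, world, first, last);
    }
    for (auto &t : pool)
        t.join();
}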