
Getting correct point param value when resizing effect applied

New Here, Dec 07, 2022


I'm using the After Effects API to create plugins for Premiere Pro.  For transitions the incoming pixel buffers are always the full sequence size, so point param values always line up nicely with the desired positions in the pixel buffers.  For effects, however, I run into a problem when clips are not sequence-sized and other effects are stacked before mine.

 

Retrieving the param value during GPU renders gives you a normalized 0.0-1.0 value, where 0.0 is the left/top edge of the pixel buffer and 1.0 is the right/bottom edge.  The buffer size is also given, so this is easy to translate to exact pixel positions.
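For reference, that conversion is just a multiply.  A minimal sketch; normX/normY stand for the 0.0-1.0 param values and bufferWidth/bufferHeight for the dimensions of the pixel buffer my render receives (these names are mine, not SDK ones):

// Hypothetical helper names: map a normalized GPU point param onto the buffer.
float pixelX = normX * bufferWidth;   // 0.0 -> left edge,  1.0 -> right edge
float pixelY = normY * bufferHeight;  // 0.0 -> top edge,   1.0 -> bottom edge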

 

For software renders, the returned value is in pixels.  In my code I translate that back to a 0.0-1.0 range so I have at least one common code base between software and GPU renders, but for this conversion I need to know where the clip's own pixels start (since the incoming pixel buffers are sequence sized) and how big the clip is.  When I apply just my effect to a node, this can be done by looking at in_data->pre_effect_source_origin_x/y and in_data->width/height, in combination with the downsample factors:

 

float x = (FIX_2_FLOAT(paramValue) - in_data->pre_effect_source_origin_x) / (in_data->width * float(in_data->downsample_x.num) / in_data->downsample_x.den);  // normalized 0.0-1.0
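Spelled out for both axes it looks roughly like this (a sketch of the same idea only; pointParam stands for the checked-out PF_ParamDef of the point param, and the local names are mine):

// Sketch: convert a software-render point param (FIXED pixels) to the same
// 0.0-1.0 range the GPU render path reports, using the pre-effect origin and
// the downsample factors from PF_InData.
float clipWidth  = in_data->width  * float(in_data->downsample_x.num) / in_data->downsample_x.den;
float clipHeight = in_data->height * float(in_data->downsample_y.num) / in_data->downsample_y.den;
float normX = (FIX_2_FLOAT(pointParam.u.td.x_value) - in_data->pre_effect_source_origin_x) / clipWidth;
float normY = (FIX_2_FLOAT(pointParam.u.td.y_value) - in_data->pre_effect_source_origin_y) / clipHeight;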

 

This way my x is consistent between GPU and software renders, and it also matches the crosshair param overlay that Premiere itself draws for the param.  Nice!

 

However... when I apply other effects before mine, all bets are off, even for standard Premiere effects.  Some effects expand the pixel buffer, and then the above calculation no longer matches Premiere's built-in point param overlay.  The amount of mismatch also differs between software and GPU renders, which means my plugin can no longer give consistent results once users start switching between renderers (software vs. GPU).

 

What I found is that "Distort\Transform" and "Perspective\Drop shadow" enlarge the buffer to whatever they need, and for GPU renders that expanded pixel buffer is exactly what my plugin receives.  The point params are still 0.0-1.0 over the now expanded buffer, so when I draw a blob at the point's location it no longer matches the overlay.  For software renders I can remap the point param values using the method above, but then the result differs from what GPU renders produce.  I could instead remap the point param over the entire incoming pixel buffer (PF_LayerDef), but then it differs even more.
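For comparison, that second option (remapping over the whole incoming PF_LayerDef) would look roughly like this; a sketch only, assuming input is the incoming layer (params[0]->u.ld).  It mainly shows why the two mappings drift apart once an upstream effect has grown the buffer:

// Sketch: remap the software point param over the full incoming buffer instead
// of the clip's own pixels.  When an upstream effect has expanded the buffer,
// input->width/height no longer equal the clip size, so this result and the
// pre_effect_source_origin-based one diverge.
float normX = FIX_2_FLOAT(pointParam.u.td.x_value) / float(input->width);
float normY = FIX_2_FLOAT(pointParam.u.td.y_value) / float(input->height);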

 

Other effects, like "Perspective\Radial shadow", enlarge the pixel buffer to the entire sequence size; it seems it just wants to process the entire incoming PF_LayerDef.  So it's pretty much impossible to predict what other effects higher up have done with the pixel buffers...

 

I also tried to find out whether I could get hold of the effect's clip and/or media node via the video segment suite (to query properties like kVideoSegmentProperty_Media_ClipScaleToFrameSize and see what they reveal), but that turned out to get complex quite fast from within software renders.

 

How can I compensate for this when reading out point param values?  Or are users generally already used to effects not giving identical render results for software vs. GPU renders?  "Generators\4-color gradient", for example, will also have its rendered result desync from its point param overlays when "Perspective\Drop shadow" is applied before it, while applying "Perspective\Radial shadow" doesn't cause this.  4-color gradient is not a GPU-accelerated effect, so I can't check how it would behave there.  But when applying "Distort\Offset" after "Perspective\Drop shadow", the resulting output is totally different between GPU and software renders.

TOPICS: Error or problem, Hardware or GPU, SDK

New Here, Dec 07, 2022


Closely related to my other question: since native Premiere point param overlays do not get activated for transitions, I already draw my own overlay during my transition's regular render to assist users.  However, if I can't get Premiere's built-in overlay to match what I finally draw in my effects, I'd rather use my own overlay there too and hide Premiere's overlay.  I tried setting the PF_PUI_CONTROL flag when adding the point param and then simply not responding to the PF_Event_DRAW event, but Premiere's own overlay still shows.
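For reference, this is roughly how that param is registered; a sketch from memory, with a placeholder name and disk ID (and PF_OutFlag_CUSTOM_UI set in GlobalSetup, as the custom UI flag requires):

// Sketch of the setup described above: a point param flagged as custom UI,
// after which I simply don't draw anything in PF_Event_DRAW.
// Premiere still draws its own crosshair overlay for it regardless.
PF_ParamDef def;
AEFX_CLR_STRUCT(def);
def.ui_flags  = PF_PUI_CONTROL;   // mark the param as having custom UI
def.ui_width  = 0;                // no extra ECW real estate needed
def.ui_height = 0;
PF_ADD_POINT("Center",            // placeholder name
             50, 50,              // default at 50% / 50% of the layer
             FALSE,               // don't restrict to layer bounds
             CENTER_DISK_ID);     // placeholder disk ID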

 

Can I hide Premiere's own point param overlay, and if so: how?

 

 

[Moderator note: posts merged because of the similar questions from a single user.]

Community Expert, Dec 07, 2022


This would be either an Ae question, or perhaps one suited for @Bruce Bullis.

 

Neil
