
Motion Blur - Accessing Layer's Transforms

Community Beginner, Mar 26, 2025

Yo

 

I've been trying for quite some time to make Motion Blur as a normal effect plug-in and I want to ask if it is even possible. Can I access the layer's transforms through a normal effect plug-in? What do I need to use?

 

I want to make sure whether it is possible or not. The only other way is making it an AEGP, right? I would like to avoid that.

TOPICS: How to, SDK
Translate
Report
Community guidelines
Be kind and respectful, give credit to the original source of content, and search for duplicates before posting. Learn more
community guidelines
Community Expert, Mar 26, 2025

either AEGP_GetLayerToWorldXform or AEGP_GetLayerToWorldXformFromView will get you the transformation for a given time, which you can use to calculate motion blur.

however, as for the rest of your question, it's a bit too broad. can you explain what you're trying to do in more detail?

Community Beginner, Mar 26, 2025

Thanks. 

 

At least now I know I can make it. Any simple example of how to use those to access the transforms? Would be helpful tbh.

 

And for the rest of the question, I was talking about making Motion Blur as an AEGP (Artisan ig) if making it a normal effect plugin was not an option.

 

Also, for a tiles effect, I use the same stuff, right? I need to access the layer's transforms for that one too.

Community Expert, Mar 26, 2025

if you're indeed making an artisan plugin, then we're entering the twilight zone...

first, let's tell two things apart:

1. the layer texture, where plug-ins render their output pre-layer-transformation; the output is a 2D, untransformed image in the same coordinate space as the original layer image.

2. the layer transformation, where the ready-to-use texture is then 2D- or 3D-transformed by the artisan and composited into the comp buffer.

 

now. "classical" motion blur is where the transformation (clause 2 above) is repeated multiple times at intervals between the transformation values at 2 points in time (usually the current time, and some point between the current time and the next frame's time).

some effects, such as particle systems or Element 3D, are not transformed by the artisan, but instead mimic the comp's perspective and output an image that looks transformed even though the texture gets composited at the layer's "rest" position in 2D.

 

now we're getting somewhere.

if you want to replace the comp's "built in" motion blur where layers move in 2D or 3D, then you can either write an artisan that does the WHOLE process of transforming and compositing, OR:

1. mimic the whole movement of the layer and have a simple layer plugin render the transformation along with the motion blur onto the layer texture; the user would apply that to an untransformed layer, as they would with 3D effects.

2. have the layer transform normally, but just add motion blur to the texture. that is tricky but possible. you'll need to deal with projections to tell what the next transformation of the layer would look like from the camera's point of view, and where that lands on the original layer at its current position, and then blur the layer's image from its rest position to the projected target position... if done correctly, you'll get a natural-looking motion blur without forcing the user to work on a flat image and without writing an artisan. alas, in such a case i fail to see the point...

 

again, if you lay out the problem you're trying to solve, then perhaps i could help with a better strategy.

Community Beginner, Mar 26, 2025

Yeah, I would like to avoid making an Artisan to be honest. I was just talking about it because I thought that's the only way to access the layer's transforms. I'm glad I can avoid it.

 

My problem is accessing the layer's transforms. My current Motion Blur is a bit modified (focused on debugging). I want to focus on accessing the layer's transforms: the current implementation should make a small red box appear in the top-left corner when motion is detected, but it does not work correctly at all. What am I doing wrong with accessing the layer transforms?

 

I am currently struggling with this. The blur code itself should be easy to make, but the motion detection / layer-transform access is the problem.

 

Btw this code is fully made with AI so of course there are problems. I'd really like some help with transforms accessing (yk maybe like an example plugin/code that does it correctly). I should be able to handle the rest.

Community Beginner, Mar 26, 2025

Lol weird, I can't upload files?

 

Anyway, I just need an example of layer accessing. How exactly do I use AEGP_GetLayerToWorldXform or AEGP_GetLayerToWorldXformFromView correctly?

Community Expert, Mar 26, 2025

nothing much to it. you pass a layer handle and a comp time, and receive a 4x4 matrix of the layer's transformation after all parenting, expressions, etc., in relation to the comp's origin of [0,0,0].

nothing else to explain here.

 

if you choose to use AEGP_GetLayerToWorldXformFromView, you get the same result, but with the view (camera or custom perspective) factored in as well. that, however, doesn't include the camera or view's perspective matrix.

Community Beginner, Mar 26, 2025

Thanks man.

 

Also, do I need to register my plugin for AEGP? Kinda confused ngl; I've been trying a lot of stuff and I saw something about registering for AEGP access.

 

Oh and if I do get this Motion Blur to work and access the layer transforms, will it also work on other effects or my plugins? Or do I need to do something different for that? Asking cuz I will also try to make a Tiles plugin and I would ofc like it to work on other effects too.

Community Expert, Mar 26, 2025

most of the AEGP callbacks that ask for a plug-in id actually do just fine taking a null instead. i don't register my effects plugins with the register suite. (unless doing something VERY specific...)

 

as for using the motion blur on other effects, i must say i didn't fully understand what it is that you're trying to do that is different from AE's normal motion blur but still requires getting the layer transformation...

Community Beginner, Mar 26, 2025

I am trying to code my own Motion Blur effect and I want to know if it will work or not on other effects (like Transform or S_BlurMoCurves or my own transform plugins) if I access the layer's transforms. And if it doesn't, do I need to do something specific to make it work on other effects?

Community Expert, Mar 26, 2025

well, motion blur is basically smearing the image from point A to point B.

when looking at motion coming from animating the transformation properties, reading the transform value of the layer at 2 points in time and rendering multiple interpolations will do the trick. (alas, it's not doing anything AE doesn't do natively.)

S_BlurMoCurves does something similar, but with arbitrary transformations, not the layer transformation. which makes sense as a plugin, because you want just the "smear" and not the motion.

as for making it "work on other effects", plugins operate linearly, each processing the output of the previous one. so technically, you could apply motion blur to anything... both sides are indifferent to each other.

alas, when you say "work on other plugins" i bet you'd want to do something that correlates to the content. say you apply your plugin after a "mosaic" effect. what should the blur blur? left? right? zoom blur? that's up to you to figure out.

 

on another note, plug-ins can request their input at different times. if you apply your effect on a comp with content, you could request the input at a series of times between the current time sample and the next frame, and average those inputs. now you get a "motion blur" of that comp's inside animation, whatever it may be, without caring about transformations.

Community Beginner, Mar 26, 2025

Alright, thanks man. I should be able to handle it now.

Community Beginner, Mar 27, 2025

Oh btw btw, what do I use to detect 32bit depth? Like for 16bit I use PF_WORLD_IS_DEEP, what do I use for 32bit?

Community Expert, Mar 27, 2025

this is how i check it out, in a rather generalized approach (fitting for extension to Premiere, where other pixel formats are used as well). pass the input or output buffer as an arg to PF_GetPixelFormat (after checking them out, of course).

PF_PixelFormat	pixelFormat;
AEGP_WorldType	worldType;

PF_WorldSuite2 *wsP = NULL;
ERR(suites.Pica()->AcquireSuite(kPFWorldSuite, kPFWorldSuiteVersion2, (const void**)&wsP));
ERR(wsP->PF_GetPixelFormat(output, &pixelFormat));
ERR(suites.Pica()->ReleaseSuite(kPFWorldSuite, kPFWorldSuiteVersion2));

switch (pixelFormat)
{
case PF_PixelFormat_ARGB128:	// 32bpc float
	worldType = AEGP_WorldType_32;
	break;
case PF_PixelFormat_ARGB64:	// 16bpc
	worldType = AEGP_WorldType_16;
	break;
case PF_PixelFormat_ARGB32:	// 8bpc
	worldType = AEGP_WorldType_8;
	break;
default:
	worldType = AEGP_WorldType_NONE;
	break;
}

 

Community Beginner, Mar 27, 2025

Thanks, that is actually extremely helpful.

 

Now another question: why do my plugins interact so weirdly with each other? What do I need to change? Why do the Exposure/Gamma, the Directional Blur, or the Pinch/Bulge modify the layer's position? They should not touch the position at all. They only do that when I add one of the first 3 effects; if I just add them individually, they don't affect the position. What do I need to change so they don't interact with each other that way?

 

(I hope I can at least upload screenshots)

 

Well if they don't get attached, the problem is that my plugins interact weird with each other. Plugins that normally do not change the layer's position, change it when I apply my transform plugins. 

Community Expert, Mar 27, 2025

most likely it's because these effects expand or contract the output buffer, so 0,0 on the input buffer doesn't always mean 0,0 on the source layer.

either use the in_data->pre_effect_source_origin_x/y, or the PF_EffectWorld->origin_x/y to tell the offset from the source coordinates to the buffer coordinates.

 

while you're at it, read up on in_data->output_origin_x/y and in_data->downsample_x/y. you'll need to take them into account as well, to avoid similar oddities.

Community Beginner, Mar 27, 2025

Alright, thanks. Man this is so useful, all of my current questions were answered. I should be able to fix the plugins, should be able to code motion blur... I'll most probably still ask things in the next couple days tho lol.

Community Beginner, Mar 28, 2025

Ye, just got a new problem😭

 

I think I managed to properly access the layer's transforms, BUT the problem now is that once a frame gets processed as "motion" or "no motion", it stays that way permanently; it does not get re-processed. So basically if I add the plugin and the first 10 frames, for example, get processed as "no motion", they remain that way even if I add keyframes. And the other way around.

 

From what the AI says the problem is with cached frames or something.

#include "MotionBlur.h"
#include "AE_Macros.h"
#include "AE_Effect.h"
#include "AE_EffectCB.h"
#include "AE_EffectSuites.h"
#include <math.h>
#include <fstream>
#include <ctime>
#include <string>

// Define M_PI if it's not defined
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#ifndef kPF_WorldTransformSuite
#define kPF_WorldTransformSuite "PF World Transform Suite"
#define kPF_WorldTransformSuiteVersion1 1
#endif

// Define PF_NewWorldFlag_DEEP if not defined
#ifndef PF_NewWorldFlag_DEEP
#define PF_NewWorldFlag_DEEP 1
#endif

// Add this near the top of your file
static AEGP_PluginID g_plugin_id = NULL;

// Structure to store layer transform data
typedef struct {
    PF_FpLong position_x;  // Position X
    PF_FpLong position_y;  // Position Y
    PF_FpLong scale_x;     // Scale X
    PF_FpLong scale_y;     // Scale Y
    PF_FpLong rotation;    // Rotation
} LayerTransformData;

// Modified MotionBlurSequenceData structure
typedef struct {
    LayerTransformData prev_transform;
    LayerTransformData curr_transform;
    PF_Boolean transform_changed;
    PF_FpLong motion_x;
    PF_FpLong motion_y;
    PF_FpLong scale_factor;
    PF_FpLong rotation_delta;
    // Add a counter for frames with movement
    A_long movement_frames_count;
    // Add a flag to indicate current frame has movement
    PF_Boolean current_frame_has_movement;
    // Add a timestamp for the last calculation
    A_long last_calculation_time;
    // Add a frame cache to track which frames have motion (up to 1000 frames)
    // NOTE: We'll still keep this for logging/debugging, but won't rely on it for motion detection
    PF_Boolean frame_has_motion[1000];
    // Track the last processed frame
    A_long last_processed_frame;
} MotionBlurSequenceData;

// Enhanced debug logging function
static void
LogMotionDetection(
    PF_InData* in_data,
    MotionBlurSequenceData* seqP,
    const A_Matrix4* prev_matrix,
    const A_Matrix4* curr_matrix,
    const A_Matrix4* next_matrix,
    bool forceLog = false)
{
    // Log even if transform didn't change when forceLog is true
    if (!seqP || (!seqP->transform_changed && !forceLog)) {
        return;
    }

    // Create log file path
    std::string logFilePath = "D:\\MotionBlurDebug.log";

    // Open log file in append mode
    std::ofstream logFile(logFilePath.c_str(), std::ios::app);

    if (!logFile.is_open()) {
        return; // Failed to open file
    }

    // Get current time for log timestamp
    time_t now = time(0);
    char timeStr[26];
    ctime_s(timeStr, sizeof(timeStr), &now);
    timeStr[24] = '\0'; // Remove newline

    // Write log header with timestamp and frame info
    logFile << "==== Motion Detection Log: " << timeStr << " ====" << std::endl;
    logFile << "Frame: " << in_data->current_time << "/" << in_data->time_scale << std::endl;
    logFile << "Total Frames with Movement: " << seqP->movement_frames_count << std::endl;

    // Log motion values
    logFile << "Motion X: " << seqP->motion_x << " pixels/frame" << std::endl;
    logFile << "Motion Y: " << seqP->motion_y << " pixels/frame" << std::endl;
    logFile << "Scale Factor: " << seqP->scale_factor << " change/frame" << std::endl;
    logFile << "Rotation Delta: " << seqP->rotation_delta << " degrees/frame" << std::endl;

    // Log motion magnitude
    PF_FpLong motionMagnitude = sqrt(seqP->motion_x * seqP->motion_x + seqP->motion_y * seqP->motion_y);
    logFile << "Motion Magnitude: " << motionMagnitude << " pixels/frame" << std::endl;

    // Log transform status
    logFile << "Transform Changed: " << (seqP->transform_changed ? "YES" : "NO") << std::endl;

    // Log threshold values
    logFile << "Position Threshold: " << 0.1 << " pixels/frame" << std::endl;
    logFile << "Scale Threshold: " << 0.001 << " change/frame" << std::endl;
    logFile << "Rotation Threshold: " << 0.1 << " degrees/frame" << std::endl;

    // Log matrix data if available
    if (prev_matrix && curr_matrix && next_matrix) {
        logFile << "\nMatrix Data:" << std::endl;

        // Log current matrix
        logFile << "Current Matrix:" << std::endl;
        for (int i = 0; i < 4; i++) {
            logFile << "  [";
            for (int j = 0; j < 4; j++) {
                logFile << curr_matrix->mat[i][j];
                if (j < 3) logFile << ", ";
            }
            logFile << "]" << std::endl;
        }

        // Log position data from matrices
        logFile << "\nPosition from matrices:" << std::endl;
        logFile << "  Prev: [" << prev_matrix->mat[3][0] << ", " << prev_matrix->mat[3][1] << ", " << prev_matrix->mat[3][2] << "]" << std::endl;
        logFile << "  Curr: [" << curr_matrix->mat[3][0] << ", " << curr_matrix->mat[3][1] << ", " << curr_matrix->mat[3][2] << "]" << std::endl;
        logFile << "  Next: [" << next_matrix->mat[3][0] << ", " << next_matrix->mat[3][1] << ", " << next_matrix->mat[3][2] << "]" << std::endl;
    }

    // Add separator for readability
    logFile << "----------------------------------------" << std::endl;

    // Close the log file
    logFile.close();
}

static PF_Err
About(
    PF_InData* in_data,
    PF_OutData* out_data,
    PF_ParamDef* params[],
    PF_LayerDef* output)
{
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    suites.ANSICallbacksSuite1()->sprintf(out_data->return_msg,
        "%s v%d.%d\r%s",
        STR_NAME,
        MAJOR_VERSION,
        MINOR_VERSION,
        STR_DESCRIPTION);
    return PF_Err_NONE;
}

static PF_Err
GlobalSetup(
    PF_InData* in_data,
    PF_OutData* out_data,
    PF_ParamDef* params[],
    PF_LayerDef* output)
{
    // Use the exact version number from the PiPL file (524288 = 0x80000)
    out_data->my_version = 524288; // Version 1.0 as defined in PiPL

    out_data->out_flags = PF_OutFlag_DEEP_COLOR_AWARE;    // just 16bpc, not 32bpc
    out_data->out_flags |= PF_OutFlag_I_USE_SHUTTER_ANGLE; // Tell AE we need shutter angle info
    out_data->out_flags |= PF_OutFlag_WIDE_TIME_INPUT;    // We need to check parameters at different times
    out_data->out_flags |= PF_OutFlag_NON_PARAM_VARY;     // Output depends on layer transforms at other times,
                                                          // so AE must not reuse cached frames when they change
    out_data->out_flags2 = PF_OutFlag2_NONE;

    return PF_Err_NONE;
}

static PF_Err
ParamsSetup(
    PF_InData* in_data,
    PF_OutData* out_data,
    PF_ParamDef* params[],
    PF_LayerDef* output)
{
    PF_Err        err = PF_Err_NONE;
    PF_ParamDef    def;

    AEFX_CLR_STRUCT(def);

    // Add Tune slider
    PF_ADD_FLOAT_SLIDERX(STR_TUNE_NAME,
        0,
        4,
        0,
        4,
        1,
        PF_Precision_INTEGER,
        0,
        0,
        TUNE_DISK_ID);

    // Add Position checkbox
    AEFX_CLR_STRUCT(def);
    PF_ADD_CHECKBOX(STR_POSITION_NAME,
        "On",
        TRUE,
        0,
        POSITION_DISK_ID);

    // Add Scale checkbox
    AEFX_CLR_STRUCT(def);
    PF_ADD_CHECKBOX(STR_SCALE_NAME,
        "On",
        TRUE,
        0,
        SCALE_DISK_ID);

    // Add Angle checkbox
    AEFX_CLR_STRUCT(def);
    PF_ADD_CHECKBOX(STR_ANGLE_NAME,
        "On",
        TRUE,
        0,
        ANGLE_DISK_ID);

    out_data->num_params = MOTIONBLUR_NUM_PARAMS;

    return err;
}

// Function to get pixel value with bounds checking (16-bit)
static PF_Err
GetPixel16(
    PF_EffectWorld* input,
    int              x,
    int              y,
    PF_Pixel16* outP)
{
    PF_Err err = PF_Err_NONE;

    // Bounds checking
    if (x < 0) x = 0;
    if (y < 0) y = 0;
    if (x >= input->width) x = input->width - 1;
    if (y >= input->height) y = input->height - 1;

    // Get the pixel
    PF_Pixel16* pixelP = (PF_Pixel16*)((char*)input->data +
        (y * input->rowbytes) +
        (x * sizeof(PF_Pixel16)));

    *outP = *pixelP;

    return err;
}

// Function to get pixel value with bounds checking (8-bit)
static PF_Err
GetPixel8(
    PF_EffectWorld* input,
    int              x,
    int              y,
    PF_Pixel8* outP)
{
    PF_Err err = PF_Err_NONE;

    // Bounds checking
    if (x < 0) x = 0;
    if (y < 0) y = 0;
    if (x >= input->width) x = input->width - 1;
    if (y >= input->height) y = input->height - 1;

    // Get the pixel
    PF_Pixel8* pixelP = (PF_Pixel8*)((char*)input->data +
        (y * input->rowbytes) +
        (x * sizeof(PF_Pixel8)));

    *outP = *pixelP;

    return err;
}

// Helper function to extract rotation from a matrix
PF_FpLong extract_rotation(const A_Matrix4& matrix, PF_FpLong scale_x, PF_FpLong scale_y) {
    // Use atan2 on the normalized first column to get rotation
    if (scale_x > 0.0001) {
        PF_FpLong n00 = matrix.mat[0][0] / scale_x;
        PF_FpLong n01 = matrix.mat[0][1] / scale_x;
        return atan2(n01, n00) * 180.0 / M_PI;
    }
    return 0.0;
}

// Modified CalculateTransformChanges to properly recalculate motion state every time
static PF_Err
CalculateTransformChanges(
    PF_InData* in_data,
    MotionBlurSequenceData* seqP)
{
    PF_Err err = PF_Err_NONE;
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    // Get current frame number (normalized to 0-999 range for our cache)
    A_long current_frame = (in_data->current_time / in_data->time_step) % 1000;

    // Store previous motion state for this frame
    bool previous_state = seqP->frame_has_motion[current_frame];

    // IMPORTANT FIX: Always reset motion values at the start of calculation
    // This ensures we don't use stale values if no motion is detected
    seqP->motion_x = 0;
    seqP->motion_y = 0;
    seqP->scale_factor = 0;
    seqP->rotation_delta = 0;
    seqP->transform_changed = false;
    seqP->current_frame_has_movement = false;

    // Get the PF Interface Suite directly
    AEGP_PFInterfaceSuite1* pfInterfaceSuite = suites.PFInterfaceSuite1();

    if (pfInterfaceSuite) {
        // Get the layer handle for the current effect
        AEGP_LayerH layerH = NULL;
        err = pfInterfaceSuite->AEGP_GetEffectLayer(in_data->effect_ref, &layerH);

        if (!err && layerH) {
            // Get the LayerSuite directly
            AEGP_LayerSuite9* layerSuite = suites.LayerSuite9();

            if (layerSuite) {
                // Get current time
                A_Time current_time;
                current_time.value = in_data->current_time;
                current_time.scale = in_data->time_scale;

                // Calculate previous and next times for velocity calculation
                A_Time prev_time, next_time;
                prev_time.scale = next_time.scale = in_data->time_scale;

                // Use a full frame step for detecting motion
                A_long time_step = in_data->time_step;

                prev_time.value = in_data->current_time - time_step;
                next_time.value = in_data->current_time + time_step;

                // Get layer transform at previous, current, and next times
                A_Matrix4 prev_matrix, curr_matrix, next_matrix;

                // Get the layer-to-world matrices at the three times
                // (AEGP_GetLayerToWorldXform takes no plugin_id argument)
                A_Err err1 = layerSuite->AEGP_GetLayerToWorldXform(layerH, &prev_time, &prev_matrix);
                A_Err err2 = layerSuite->AEGP_GetLayerToWorldXform(layerH, &current_time, &curr_matrix);
                A_Err err3 = layerSuite->AEGP_GetLayerToWorldXform(layerH, &next_time, &next_matrix);

                if (!err1 && !err2 && !err3) {
                    // Extract position, scale, and rotation from matrices
                    LayerTransformData prev_transform, curr_transform, next_transform;

                    // Position is in the last column of the matrix
                    prev_transform.position_x = prev_matrix.mat[3][0];
                    prev_transform.position_y = prev_matrix.mat[3][1];
                    curr_transform.position_x = curr_matrix.mat[3][0];
                    curr_transform.position_y = curr_matrix.mat[3][1];
                    next_transform.position_x = next_matrix.mat[3][0];
                    next_transform.position_y = next_matrix.mat[3][1];

                    // Extract scale from matrices
                    prev_transform.scale_x = sqrt(prev_matrix.mat[0][0] * prev_matrix.mat[0][0] +
                        prev_matrix.mat[0][1] * prev_matrix.mat[0][1]);
                    prev_transform.scale_y = sqrt(prev_matrix.mat[1][0] * prev_matrix.mat[1][0] +
                        prev_matrix.mat[1][1] * prev_matrix.mat[1][1]);

                    curr_transform.scale_x = sqrt(curr_matrix.mat[0][0] * curr_matrix.mat[0][0] +
                        curr_matrix.mat[0][1] * curr_matrix.mat[0][1]);
                    curr_transform.scale_y = sqrt(curr_matrix.mat[1][0] * curr_matrix.mat[1][0] +
                        curr_matrix.mat[1][1] * curr_matrix.mat[1][1]);

                    next_transform.scale_x = sqrt(next_matrix.mat[0][0] * next_matrix.mat[0][0] +
                        next_matrix.mat[0][1] * next_matrix.mat[0][1]);
                    next_transform.scale_y = sqrt(next_matrix.mat[1][0] * next_matrix.mat[1][0] +
                        next_matrix.mat[1][1] * next_matrix.mat[1][1]);

                    // Extract rotation from matrices
                    prev_transform.rotation = extract_rotation(prev_matrix, prev_transform.scale_x, prev_transform.scale_y);
                    curr_transform.rotation = extract_rotation(curr_matrix, curr_transform.scale_x, curr_transform.scale_y);
                    next_transform.rotation = extract_rotation(next_matrix, next_transform.scale_x, next_transform.scale_y);

                    // Calculate velocities (change per frame)
                    PF_FpLong time_factor = (PF_FpLong)in_data->time_scale / time_step;

                    // Position velocity (pixels per frame)
                    seqP->motion_x = (curr_transform.position_x - prev_transform.position_x) * time_factor;
                    seqP->motion_y = (curr_transform.position_y - prev_transform.position_y) * time_factor;

                    // Scale velocity (change per frame)
                    PF_FpLong scale_x_vel = (curr_transform.scale_x - prev_transform.scale_x) * time_factor;
                    PF_FpLong scale_y_vel = (curr_transform.scale_y - prev_transform.scale_y) * time_factor;

                    // Use average of X and Y scale velocity, normalized by current scale
                    PF_FpLong avg_scale = (curr_transform.scale_x + curr_transform.scale_y) / 2.0;
                    if (avg_scale > 0.0001) {
                        seqP->scale_factor = (scale_x_vel + scale_y_vel) / (2.0 * avg_scale);
                    }
                    else {
                        seqP->scale_factor = 0;
                    }

                    // Rotation velocity (degrees per frame)
                    // Need to handle wraparound for rotation
                    PF_FpLong curr_rot = curr_transform.rotation;
                    PF_FpLong prev_rot = prev_transform.rotation;

                    // Normalize rotation difference to be in range [-180, 180]
                    PF_FpLong rot_diff = curr_rot - prev_rot;
                    while (rot_diff > 180.0) rot_diff -= 360.0;
                    while (rot_diff < -180.0) rot_diff += 360.0;

                    seqP->rotation_delta = rot_diff * time_factor;

                    // Determine if there's enough motion to apply blur
                    PF_FpLong pos_speed = sqrt(seqP->motion_x * seqP->motion_x +
                        seqP->motion_y * seqP->motion_y);

                    bool position_changed = (pos_speed > 0.1);  // 0.1 pixels per frame
                    bool scale_changed = (fabs(seqP->scale_factor) > 0.001);  // 0.1% per frame
                    bool rotation_changed = (fabs(seqP->rotation_delta) > 0.1);  // 0.1 degrees per frame

                    // CRITICAL FIX: Always set transform_changed based on current motion detection
                    // Don't rely on previously cached values
                    seqP->current_frame_has_movement = position_changed || scale_changed || rotation_changed;
                    seqP->transform_changed = seqP->current_frame_has_movement;

                    // Update the frame cache for this frame
                    seqP->frame_has_motion[current_frame] = seqP->transform_changed;

                    // Log motion detection information to file
                    LogMotionDetection(in_data, seqP, &prev_matrix, &curr_matrix, &next_matrix, true);

                    // CRITICAL FIX: Increment counter if this frame now has motion and either:
                    // 1. It's a new frame we haven't processed before, or
                    // 2. It previously didn't have motion but now does
                    if (seqP->transform_changed &&
                        (seqP->last_processed_frame != current_frame || !previous_state)) {
                        seqP->movement_frames_count++;
                    }

                    // Store current transform for next time
                    seqP->curr_transform = curr_transform;
                    seqP->prev_transform = prev_transform;
                }
                else {
                    // Log error
                    std::string logFilePath = "D:\\MotionBlurDebug.log";
                    std::ofstream logFile(logFilePath.c_str(), std::ios::app);
                    if (logFile.is_open()) {
                        logFile << "AEGP_GetLayerToWorldXform ERROR: "
                            << "prev=" << err1 << ", curr=" << err2 << ", next=" << err3
                            << " at frame " << current_frame << std::endl;
                        logFile.close();
                    }

                    // Report error
                    suites.UtilitySuite6()->AEGP_ReportInfo(NULL, "AEGP_GetLayerToWorldXform FAILED!");
                }

                // Update the timestamp and last processed frame
                seqP->last_calculation_time = in_data->current_time;
                seqP->last_processed_frame = current_frame;
            }
        }
    }

    return err;
}


static void
ClearMotionBlurState(MotionBlurSequenceData* seqP)
{
    // Reset all motion values to zero
    seqP->motion_x = 0;
    seqP->motion_y = 0;
    seqP->scale_factor = 0;
    seqP->rotation_delta = 0;

    // Reset flags
    seqP->transform_changed = false;
    seqP->current_frame_has_movement = false;
}

// Modified MotionBlurFunc16 function to always use current motion state
static PF_Err
MotionBlurFunc16(
    void* refcon,
    A_long        xL,
    A_long        yL,
    PF_Pixel16* inP,
    PF_Pixel16* outP)
{
    PF_Err err = PF_Err_NONE;

    MotionBlurInfo* mbiP = reinterpret_cast<MotionBlurInfo*>(refcon);
    if (!mbiP || !mbiP->input) {
        *outP = *inP; // Safety check
        return err;
    }

    // Initialize output with current pixel - always do this first
    *outP = *inP;

    // Skip blur if tune is zero
    if (mbiP->tuneF <= 0) {
        return err;
    }

    // Check if sequence data is available
    if (mbiP->sequence_data) {
        AEGP_SuiteHandler suites(mbiP->in_data->pica_basicP);

        // Lock handle to access sequence data
        MotionBlurSequenceData* seqP = reinterpret_cast<MotionBlurSequenceData*>(
            suites.HandleSuite1()->host_lock_handle(mbiP->sequence_data));

        if (seqP) {
            // Add a red box in top-left corner ONLY if movement is detected in this frame
            bool in_movement_indicator_zone = (xL < 50 && yL < 50);

            if (in_movement_indicator_zone && seqP->transform_changed) {
                outP->red = 32768;    // Red (16-bit value)
                outP->green = 0;
                outP->blue = 0;
                outP->alpha = 32768;  // Full opacity (16-bit value)

                // Unlock handle before returning
                suites.HandleSuite1()->host_unlock_handle(mbiP->sequence_data);
                return err;
            }

            // CRITICAL FIX: ONLY apply blur if transform_changed is true for the current frame
            // Don't rely on cached frame_has_motion values
            if (!seqP->transform_changed) {
                // No movement in current frame, just return the original pixel
                suites.HandleSuite1()->host_unlock_handle(mbiP->sequence_data);
                return err;
            }

            // Motion values must be non-zero to apply blur
            bool has_position_motion = (fabs(seqP->motion_x) > 0.1 || fabs(seqP->motion_y) > 0.1);
            bool has_scale_motion = (fabs(seqP->scale_factor) > 0.001);
            bool has_rotation_motion = (fabs(seqP->rotation_delta) > 0.1);

            // If no actual motion values, don't apply blur
            if (!has_position_motion && !has_scale_motion && !has_rotation_motion) {
                suites.HandleSuite1()->host_unlock_handle(mbiP->sequence_data);
                return err;
            }

            // Calculate center of layer
            PF_FpLong center_x = mbiP->input->width / 2.0;
            PF_FpLong center_y = mbiP->input->height / 2.0;

            // Calculate pixel position relative to center
            PF_FpLong dx = xL - center_x;
            PF_FpLong dy = yL - center_y;

            // Initialize accumulators for the final color
            A_long redSum = inP->red;
            A_long greenSum = inP->green;
            A_long blueSum = inP->blue;
            A_long alphaSum = inP->alpha;
            int totalSamples = 1;  // Start with 1 for the original pixel

            // Position-based motion blur
            if (mbiP->position && has_position_motion) {
                // Get motion velocity scaled by tune parameter
                PF_FpLong vel_x = seqP->motion_x * mbiP->tuneF;
                PF_FpLong vel_y = seqP->motion_y * mbiP->tuneF;

                // Calculate speed in pixels per frame
                PF_FpLong speed = sqrt(vel_x * vel_x + vel_y * vel_y);

                // Calculate blur length based on shutter angle
                PF_FpLong shutter_factor = mbiP->in_data->shutter_angle / 360.0;
                PF_FpLong blur_length = speed * shutter_factor;

                // Determine number of samples based on blur length
                int nSamples = (int)MIN(MAX(blur_length * 2.0, 1.0), 100.0);

                if (nSamples > 1) {
                    // Sample along the motion vector
                    for (int i = 1; i < nSamples; i++) {
                        // Calculate offset: (i/(nSamples-1) - 0.5) * blur_length
                        PF_FpLong fraction = (PF_FpLong)i / (nSamples - 1) - 0.5;
                        PF_FpLong offset_x = vel_x * fraction * shutter_factor;
                        PF_FpLong offset_y = vel_y * fraction * shutter_factor;

                        // Calculate sample position
                        int sampleX = xL - (int)offset_x;
                        int sampleY = yL - (int)offset_y;

                        // Sample pixel with bounds checking
                        PF_Pixel16 sampledPixel;
                        GetPixel16(mbiP->input, sampleX, sampleY, &sampledPixel);

                        // Accumulate
                        redSum += sampledPixel.red;
                        greenSum += sampledPixel.green;
                        blueSum += sampledPixel.blue;
                        alphaSum += sampledPixel.alpha;
                        totalSamples++;
                    }
                }
            }

            // Scale-based motion blur
            if (mbiP->scale && has_scale_motion) {
                // Calculate distance from center
                PF_FpLong distance = sqrt(dx * dx + dy * dy);

                if (distance > 0.1) {
                    // Normalize vector from center to pixel
                    PF_FpLong dir_x = dx / distance;
                    PF_FpLong dir_y = dy / distance;

                    // Calculate scale velocity
                    PF_FpLong scale_vel = seqP->scale_factor * mbiP->tuneF;

                    // Calculate blur length based on distance and scale velocity
                    PF_FpLong shutter_factor = mbiP->in_data->shutter_angle / 360.0;
                    PF_FpLong blur_length = distance * fabs(scale_vel) * shutter_factor;

                    // Determine number of samples
                    int nSamples = (int)MIN(MAX(blur_length / 5.0, 1.0), 100.0);

                    if (nSamples > 1) {
                        // Sample along the radial direction
                        for (int i = 1; i < nSamples; i++) {
                            // Calculate offset: (i/(nSamples-1) - 0.5) * blur_length
                            PF_FpLong fraction = (PF_FpLong)i / (nSamples - 1) - 0.5;
                            PF_FpLong offset_magnitude = distance * scale_vel * fraction * shutter_factor;

                            // Calculate sample position
                            int sampleX = xL - (int)(dir_x * offset_magnitude);
                            int sampleY = yL - (int)(dir_y * offset_magnitude);

                            // Sample pixel with bounds checking
                            PF_Pixel16 sampledPixel;
                            GetPixel16(mbiP->input, sampleX, sampleY, &sampledPixel);

                            // Accumulate
                            redSum += sampledPixel.red;
                            greenSum += sampledPixel.green;
                            blueSum += sampledPixel.blue;
                            alphaSum += sampledPixel.alpha;
                            totalSamples++;
                        }
                    }
                }
            }

            // Rotation-based motion blur
            if (mbiP->angle && has_rotation_motion) {
                // Calculate distance from center
                PF_FpLong distance = sqrt(dx * dx + dy * dy);

                if (distance > 0.1) {
                    // Calculate rotation velocity in radians
                    PF_FpLong rot_vel = seqP->rotation_delta * M_PI / 180.0 * mbiP->tuneF;

                    // Calculate blur angle based on rotation velocity and shutter angle
                    PF_FpLong shutter_factor = mbiP->in_data->shutter_angle / 360.0;
                    PF_FpLong blur_angle = rot_vel * shutter_factor;

                    // Calculate arc length
                    PF_FpLong arc_length = distance * fabs(blur_angle);

                    // Determine number of samples
                    int nSamples = (int)MIN(MAX(arc_length / 5.0, 1.0), 100.0);

                    if (nSamples > 1) {
                        // Calculate half angle for symmetric sampling
                        PF_FpLong half_angle = blur_angle / 2.0;

                        // Sample along the arc
                        for (int i = 1; i < nSamples; i++) {
                            // Spread samples evenly across [-half_angle, +half_angle]
                            // (divide by nSamples - 1 so the last sample lands on +half_angle,
                            // matching the position/scale loops)
                            PF_FpLong sample_angle = -half_angle + blur_angle * (PF_FpLong)i / (PF_FpLong)(nSamples - 1);

                            // Calculate rotation matrix components
                            PF_FpLong sin_a = sin(sample_angle);
                            PF_FpLong cos_a = cos(sample_angle);

                            // Rotate point around center
                            PF_FpLong rotX = dx * cos_a - dy * sin_a;
                            PF_FpLong rotY = dx * sin_a + dy * cos_a;

                            // Calculate sample position
                            int sampleX = (int)(center_x + rotX);
                            int sampleY = (int)(center_y + rotY);

                            // Sample pixel with bounds checking
                            PF_Pixel16 sampledPixel;
                            GetPixel16(mbiP->input, sampleX, sampleY, &sampledPixel);

                            // Accumulate
                            redSum += sampledPixel.red;
                            greenSum += sampledPixel.green;
                            blueSum += sampledPixel.blue;
                            alphaSum += sampledPixel.alpha;
                            totalSamples++;
                        }
                    }
                }
            }

            // Average the accumulated values
            if (totalSamples > 1) {
                outP->red = (A_u_short)(redSum / totalSamples);
                outP->green = (A_u_short)(greenSum / totalSamples);
                outP->blue = (A_u_short)(blueSum / totalSamples);
                outP->alpha = (A_u_short)(alphaSum / totalSamples);
            }

            // Unlock handle
            suites.HandleSuite1()->host_unlock_handle(mbiP->sequence_data);
        }
    }

    return err;
}

// Modified MotionBlurFunc8 function to always use current motion state
static PF_Err
MotionBlurFunc8(
    void* refcon,
    A_long        xL,
    A_long        yL,
    PF_Pixel8* inP,
    PF_Pixel8* outP)
{
    PF_Err err = PF_Err_NONE;

    MotionBlurInfo* mbiP = reinterpret_cast<MotionBlurInfo*>(refcon);
    if (!mbiP || !mbiP->input) {
        *outP = *inP; // Safety check
        return err;
    }

    // Initialize output with current pixel - always do this first
    *outP = *inP;

    // Skip blur if tune is zero
    if (mbiP->tuneF <= 0) {
        return err;
    }

    // Check if sequence data is available
    if (mbiP->sequence_data) {
        AEGP_SuiteHandler suites(mbiP->in_data->pica_basicP);

        // Lock handle to access sequence data
        MotionBlurSequenceData* seqP = reinterpret_cast<MotionBlurSequenceData*>(
            suites.HandleSuite1()->host_lock_handle(mbiP->sequence_data));

        if (seqP) {
            // Add a red box in top-left corner ONLY if movement is detected in this frame
            bool in_movement_indicator_zone = (xL < 50 && yL < 50);

            if (in_movement_indicator_zone && seqP->transform_changed) {
                outP->red = 255;    // Red (8-bit value)
                outP->green = 0;
                outP->blue = 0;
                outP->alpha = 255;  // Full opacity (8-bit value)

                // Unlock handle before returning
                suites.HandleSuite1()->host_unlock_handle(mbiP->sequence_data);
                return err;
            }

            // CRITICAL FIX: ONLY apply blur if transform_changed is true for the current frame
            // Don't rely on cached frame_has_motion values
            if (!seqP->transform_changed) {
                // No movement in current frame, just return the original pixel
                suites.HandleSuite1()->host_unlock_handle(mbiP->sequence_data);
                return err;
            }

            // Motion values must be non-zero to apply blur
            bool has_position_motion = (fabs(seqP->motion_x) > 0.1 || fabs(seqP->motion_y) > 0.1);
            bool has_scale_motion = (fabs(seqP->scale_factor) > 0.001);
            bool has_rotation_motion = (fabs(seqP->rotation_delta) > 0.1);

            // If no actual motion values, don't apply blur
            if (!has_position_motion && !has_scale_motion && !has_rotation_motion) {
                suites.HandleSuite1()->host_unlock_handle(mbiP->sequence_data);
                return err;
            }

            // Calculate center of layer
            PF_FpLong center_x = mbiP->input->width / 2.0;
            PF_FpLong center_y = mbiP->input->height / 2.0;

            // Calculate pixel position relative to center
            PF_FpLong dx = xL - center_x;
            PF_FpLong dy = yL - center_y;

            // Initialize accumulators for the final color
            A_long redSum = inP->red;
            A_long greenSum = inP->green;
            A_long blueSum = inP->blue;
            A_long alphaSum = inP->alpha;
            int totalSamples = 1;  // Start with 1 for the original pixel

            // Position-based motion blur
            if (mbiP->position && has_position_motion) {
                // Get motion velocity scaled by tune parameter
                PF_FpLong vel_x = seqP->motion_x * mbiP->tuneF;
                PF_FpLong vel_y = seqP->motion_y * mbiP->tuneF;

                // Calculate speed in pixels per frame
                PF_FpLong speed = sqrt(vel_x * vel_x + vel_y * vel_y);

                // Calculate blur length based on shutter angle
                PF_FpLong shutter_factor = mbiP->in_data->shutter_angle / 360.0;
                PF_FpLong blur_length = speed * shutter_factor;

                // Determine number of samples based on blur length
                int nSamples = (int)MIN(MAX(blur_length * 2.0, 1.0), 30.0);

                if (nSamples > 1) {
                    // Sample along the motion vector
                    for (int i = 1; i < nSamples; i++) {
                        // Calculate offset: (i/(nSamples-1) - 0.5) * blur_length
                        PF_FpLong fraction = (PF_FpLong)i / (nSamples - 1) - 0.5;
                        PF_FpLong offset_x = vel_x * fraction * shutter_factor;
                        PF_FpLong offset_y = vel_y * fraction * shutter_factor;

                        // Calculate sample position
                        int sampleX = xL - (int)offset_x;
                        int sampleY = yL - (int)offset_y;

                        // Sample pixel with bounds checking
                        PF_Pixel8 sampledPixel;
                        GetPixel8(mbiP->input, sampleX, sampleY, &sampledPixel);

                        // Accumulate
                        redSum += sampledPixel.red;
                        greenSum += sampledPixel.green;
                        blueSum += sampledPixel.blue;
                        alphaSum += sampledPixel.alpha;
                        totalSamples++;
                    }
                }
            }

            // Scale-based motion blur
            if (mbiP->scale && has_scale_motion) {
                // Calculate distance from center
                PF_FpLong distance = sqrt(dx * dx + dy * dy);

                if (distance > 0.1) {
                    // Normalize vector from center to pixel
                    PF_FpLong dir_x = dx / distance;
                    PF_FpLong dir_y = dy / distance;

                    // Calculate scale velocity
                    PF_FpLong scale_vel = seqP->scale_factor * mbiP->tuneF;

                    // Calculate blur length based on distance and scale velocity
                    PF_FpLong shutter_factor = mbiP->in_data->shutter_angle / 360.0;
                    PF_FpLong blur_length = distance * fabs(scale_vel) * shutter_factor;

                    // Determine number of samples
                    int nSamples = (int)MIN(MAX(blur_length / 5.0, 1.0), 30.0);

                    if (nSamples > 1) {
                        // Sample along the radial direction
                        for (int i = 1; i < nSamples; i++) {
                            // Calculate offset: (i/(nSamples-1) - 0.5) * blur_length
                            PF_FpLong fraction = (PF_FpLong)i / (nSamples - 1) - 0.5;
                            PF_FpLong offset_magnitude = distance * scale_vel * fraction * shutter_factor;

                            // Calculate sample position
                            int sampleX = xL - (int)(dir_x * offset_magnitude);
                            int sampleY = yL - (int)(dir_y * offset_magnitude);

                            // Sample pixel with bounds checking
                            PF_Pixel8 sampledPixel;
                            GetPixel8(mbiP->input, sampleX, sampleY, &sampledPixel);

                            // Accumulate
                            redSum += sampledPixel.red;
                            greenSum += sampledPixel.green;
                            blueSum += sampledPixel.blue;
                            alphaSum += sampledPixel.alpha;
                            totalSamples++;
                        }
                    }
                }
            }

            // Rotation-based motion blur
            if (mbiP->angle && has_rotation_motion) {
                // Calculate distance from center
                PF_FpLong distance = sqrt(dx * dx + dy * dy);

                if (distance > 0.1) {
                    // Calculate rotation velocity in radians
                    PF_FpLong rot_vel = seqP->rotation_delta * M_PI / 180.0 * mbiP->tuneF;

                    // Calculate blur angle based on rotation velocity and shutter angle
                    PF_FpLong shutter_factor = mbiP->in_data->shutter_angle / 360.0;
                    PF_FpLong blur_angle = rot_vel * shutter_factor;

                    // Calculate arc length
                    PF_FpLong arc_length = distance * fabs(blur_angle);

                    // Determine number of samples
                    int nSamples = (int)MIN(MAX(arc_length / 10.0, 1.0), 30.0);

                    if (nSamples > 1) {
                        // Calculate half angle for symmetric sampling
                        PF_FpLong half_angle = blur_angle / 2.0;

                        // Sample along the arc
                        for (int i = 1; i < nSamples; i++) {
                            // Spread samples evenly across [-half_angle, +half_angle]
                            // (divide by nSamples - 1 so the last sample lands on +half_angle,
                            // matching the position/scale loops)
                            PF_FpLong sample_angle = -half_angle + blur_angle * (PF_FpLong)i / (PF_FpLong)(nSamples - 1);

                            // Calculate rotation matrix components
                            PF_FpLong sin_a = sin(sample_angle);
                            PF_FpLong cos_a = cos(sample_angle);

                            // Rotate point around center
                            PF_FpLong rotX = dx * cos_a - dy * sin_a;
                            PF_FpLong rotY = dx * sin_a + dy * cos_a;

                            // Calculate sample position
                            int sampleX = (int)(center_x + rotX);
                            int sampleY = (int)(center_y + rotY);

                            // Sample pixel with bounds checking
                            PF_Pixel8 sampledPixel;
                            GetPixel8(mbiP->input, sampleX, sampleY, &sampledPixel);

                            // Accumulate
                            redSum += sampledPixel.red;
                            greenSum += sampledPixel.green;
                            blueSum += sampledPixel.blue;
                            alphaSum += sampledPixel.alpha;
                            totalSamples++;
                        }
                    }
                }
            }

            // Average the accumulated values
            if (totalSamples > 1) {
                outP->red = (A_u_char)(redSum / totalSamples);
                outP->green = (A_u_char)(greenSum / totalSamples);
                outP->blue = (A_u_char)(blueSum / totalSamples);
                outP->alpha = (A_u_char)(alphaSum / totalSamples);
            }

            // Unlock handle
            suites.HandleSuite1()->host_unlock_handle(mbiP->sequence_data);
        }
    }

    return err;
}

// Modified ResetSequenceData function to properly reset all state
static PF_Err
ResetSequenceData(PF_InData* in_data)
{
    PF_Err err = PF_Err_NONE;
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    if (in_data->sequence_data) {
        MotionBlurSequenceData* seqP = reinterpret_cast<MotionBlurSequenceData*>(
            suites.HandleSuite1()->host_lock_handle(in_data->sequence_data));

        if (seqP) {
            // Reset motion values and flags via the shared helper
            ClearMotionBlurState(seqP);

            // Reset prev and curr transforms to identity
            seqP->prev_transform.position_x = 0;
            seqP->prev_transform.position_y = 0;
            seqP->prev_transform.scale_x = 1.0;
            seqP->prev_transform.scale_y = 1.0;
            seqP->prev_transform.rotation = 0;

            seqP->curr_transform.position_x = 0;
            seqP->curr_transform.position_y = 0;
            seqP->curr_transform.scale_x = 1.0;
            seqP->curr_transform.scale_y = 1.0;
            seqP->curr_transform.rotation = 0;

            // Reset frame cache - clear all frame motion flags
            for (int i = 0; i < 1000; i++) {
                seqP->frame_has_motion[i] = false;
            }

            // Reset counters
            seqP->movement_frames_count = 0;
            seqP->last_calculation_time = 0;
            seqP->last_processed_frame = -1;

            // Log reset
            std::string logFilePath = "D:\\MotionBlurDebug.log";
            std::ofstream logFile(logFilePath.c_str(), std::ios::app);
            if (logFile.is_open()) {
                // Get current time for log timestamp
                time_t now = time(0);
                char timeStr[26];
                ctime_s(timeStr, sizeof(timeStr), &now);
                timeStr[24] = '\0'; // Remove newline

                logFile << "==== SEQUENCE DATA RESET: " << timeStr << " ====" << std::endl;
                logFile << "----------------------------------------" << std::endl;
                logFile.close();
            }

            // Unlock handle
            suites.HandleSuite1()->host_unlock_handle(in_data->sequence_data);
        }
    }

    return err;
}

// Modify SequenceSetup to initialize sequence data properly
static PF_Err
SequenceSetup(
    PF_InData* in_data,
    PF_OutData* out_data,
    PF_ParamDef* params[],
    PF_LayerDef* output)
{
    PF_Err err = PF_Err_NONE;
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    // Allocate sequence data
    PF_Handle seqH = suites.HandleSuite1()->host_new_handle(sizeof(MotionBlurSequenceData));

    if (!seqH) {
        return PF_Err_OUT_OF_MEMORY;
    }

    // Lock handle and initialize sequence data
    MotionBlurSequenceData* seqP = reinterpret_cast<MotionBlurSequenceData*>(
        suites.HandleSuite1()->host_lock_handle(seqH));

    if (!seqP) {
        suites.HandleSuite1()->host_dispose_handle(seqH);
        return PF_Err_OUT_OF_MEMORY;
    }

    // Initialize sequence data
    memset(seqP, 0, sizeof(MotionBlurSequenceData));

    // Set default values for transform data
    seqP->prev_transform.scale_x = 1.0;
    seqP->prev_transform.scale_y = 1.0;
    seqP->curr_transform.scale_x = 1.0;
    seqP->curr_transform.scale_y = 1.0;

    // Initialize frame cache - set all frames to no motion
    for (int i = 0; i < 1000; i++) {
        seqP->frame_has_motion[i] = false;
    }

    // Initialize movement frames counter
    seqP->movement_frames_count = 0;
    seqP->current_frame_has_movement = false;
    seqP->last_calculation_time = 0;
    seqP->last_processed_frame = -1;

    // Unlock handle
    suites.HandleSuite1()->host_unlock_handle(seqH);

    // Store sequence data
    out_data->sequence_data = seqH;

    return err;
}

// SequenceSetdown logs a summary and frees the sequence data
static PF_Err
SequenceSetdown(
    PF_InData* in_data,
    PF_OutData* out_data,
    PF_ParamDef* params[],
    PF_LayerDef* output)
{
    PF_Err err = PF_Err_NONE;
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    // Log final count of frames with movement
    if (in_data->sequence_data) {
        MotionBlurSequenceData* seqP = reinterpret_cast<MotionBlurSequenceData*>(
            suites.HandleSuite1()->host_lock_handle(in_data->sequence_data));

        if (seqP) {
            // Create log file path
            std::string logFilePath = "D:\\MotionBlurDebug.log";

            // Open log file in append mode
            std::ofstream logFile(logFilePath.c_str(), std::ios::app);

            if (logFile.is_open()) {
                // Get current time for log timestamp
                time_t now = time(0);
                char timeStr[26];
                ctime_s(timeStr, sizeof(timeStr), &now);
                timeStr[24] = '\0'; // Remove newline

                // Write summary log
                logFile << "==== SEQUENCE COMPLETE: " << timeStr << " ====" << std::endl;
                logFile << "Total Frames with Movement: " << seqP->movement_frames_count << std::endl;
                logFile << "----------------------------------------" << std::endl << std::endl;

                // Close the log file
                logFile.close();
            }

            // Unlock handle
            suites.HandleSuite1()->host_unlock_handle(in_data->sequence_data);
        }

        // Free sequence data
        suites.HandleSuite1()->host_dispose_handle(in_data->sequence_data);
    }

    return err;
}

// In the FrameSetup function:

static PF_Err
FrameSetup(
    PF_InData* in_data,
    PF_OutData* out_data,
    PF_ParamDef* params[],
    PF_LayerDef* output)
{
    PF_Err err = PF_Err_NONE;
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    // Get sequence data
    if (in_data->sequence_data) {
        MotionBlurSequenceData* seqP = reinterpret_cast<MotionBlurSequenceData*>(
            suites.HandleSuite1()->host_lock_handle(in_data->sequence_data));

        if (seqP) {
            // Get current frame number (guard against a zero time_step,
            // which AE can report for non-animating sources)
            A_long current_frame = 0;
            if (in_data->time_step != 0) {
                current_frame = (in_data->current_time / in_data->time_step) % 1000;
            }

            // CRITICAL FIX: Reset the frame cache for the current frame
            // This ensures we always recalculate motion for every frame, even if processed before
            seqP->frame_has_motion[current_frame] = false;

            // Reset motion values and flags at the start of each frame
            ClearMotionBlurState(seqP);

            // Always recalculate transform changes for each frame
            err = CalculateTransformChanges(in_data, seqP);

            // Unlock handle
            suites.HandleSuite1()->host_unlock_handle(in_data->sequence_data);
        }
    }

    return err;
}


static PF_Err
Render(
    PF_InData* in_data,
    PF_OutData* out_data,
    PF_ParamDef* params[],
    PF_LayerDef* output)
{
    PF_Err err = PF_Err_NONE;
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    MotionBlurInfo mbiP;
    AEFX_CLR_STRUCT(mbiP);

    // Set up info structure
    mbiP.tuneF = params[MOTIONBLUR_TUNE]->u.fs_d.value;
    mbiP.position = params[MOTIONBLUR_POSITION]->u.bd.value;
    mbiP.scale = params[MOTIONBLUR_SCALE]->u.bd.value;
    mbiP.angle = params[MOTIONBLUR_ANGLE]->u.bd.value;
    mbiP.input = &params[MOTIONBLUR_INPUT]->u.ld;
    mbiP.sequence_data = in_data->sequence_data;
    mbiP.in_data = in_data;

    // Calculate area to process
    A_long linesL = output->height;
    PF_Rect area = { 0, 0, output->width, output->height };

    // Apply blur using our pixel functions
    if (PF_WORLD_IS_DEEP(output)) {
        ERR(suites.Iterate16Suite1()->iterate(in_data,
            0,
            linesL,
            &params[MOTIONBLUR_INPUT]->u.ld,
            &area,
            (void*)&mbiP,
            MotionBlurFunc16,
            output));
    }
    else {
        ERR(suites.Iterate8Suite1()->iterate(in_data,
            0,
            linesL,
            &params[MOTIONBLUR_INPUT]->u.ld,
            &area,
            (void*)&mbiP,
            MotionBlurFunc8,
            output));
    }

    return err;
}

extern "C" DllExport
PF_Err PluginDataEntryFunction(
    PF_PluginDataPtr inPtr,
    PF_PluginDataCB inPluginDataCallBackPtr,
    SPBasicSuite* inSPBasicSuitePtr,
    const char* inHostName,
    const char* inHostVersion)
{
    PF_Err result = PF_Err_INVALID_CALLBACK;

    result = PF_REGISTER_EFFECT(
        inPtr,
        inPluginDataCallBackPtr,
        "Motion Blur", // Name
        "DKT Motion Blur", // Match Name
        "Sample Plug-ins", // Category
        AE_RESERVED_INFO); // Reserved Info

    return result;
}

PF_Err
EffectMain(
    PF_Cmd            cmd,
    PF_InData* in_data,
    PF_OutData* out_data,
    PF_ParamDef* params[],
    PF_LayerDef* output,
    void* extra)
{
    PF_Err        err = PF_Err_NONE;

    try {
        switch (cmd) {
        case PF_Cmd_ABOUT:
            err = About(in_data, out_data, params, output);
            break;

        case PF_Cmd_GLOBAL_SETUP:
            err = GlobalSetup(in_data, out_data, params, output);
            break;

        case PF_Cmd_PARAMS_SETUP:
            err = ParamsSetup(in_data, out_data, params, output);
            break;

        case PF_Cmd_SEQUENCE_SETUP:
            err = SequenceSetup(in_data, out_data, params, output);
            break;

        case PF_Cmd_SEQUENCE_SETDOWN:
            err = SequenceSetdown(in_data, out_data, params, output);
            break;

        case PF_Cmd_FRAME_SETUP:
            err = FrameSetup(in_data, out_data, params, output);
            break;

        case PF_Cmd_RENDER:
            err = Render(in_data, out_data, params, output);
            break;
        }
    }
    catch (PF_Err& thrown_err) {
        err = thrown_err;
    }
    return err;
}
Community Expert,
Mar 28, 2025

when the source of an effect is not changing over time and the effect params are not animating, AE assumes it's safe to cache the result and not re-render it. the layer itself animating in the comp is irrelevant to that decision.

you can do 2 things:

1. set PF_OutFlag_NON_PARAM_VARY on global setup. this tells AE you might need to re-render even when the source and params haven't changed.

2. read up on GuidMixInPtr(). you can use it to let AE know during pre-render whether it can or cannot use the cached output for the current frame.
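A minimal sketch of both options, assuming the SmartFX code path (PF_Cmd_SMART_PRE_RENDER) and the AE SDK headers; the exact suite versions vary by SDK release, and the matrix mixed into the cache key here is illustrative:

```cpp
// Sketch only -- assumes AE SDK headers and the SmartFX entry points.
static PF_Err
GlobalSetup(PF_InData *in_data, PF_OutData *out_data,
            PF_ParamDef *params[], PF_LayerDef *output)
{
    // Option 1: the output may vary even when source and params don't.
    out_data->out_flags |= PF_OutFlag_NON_PARAM_VARY;
    return PF_Err_NONE;
}

static PF_Err
PreRender(PF_InData *in_data, PF_OutData *out_data, PF_PreRenderExtra *extra)
{
    // Option 2: fold everything that affects the render into AE's cache key.
    // Here: the layer-to-world matrix for the current frame (e.g. fetched
    // via AEGP_GetLayerToWorldXform). If these bytes differ from the last
    // render, AE knows the cached frame is stale and re-renders.
    A_Matrix4 layer_to_world = {};  // fill in via the AEGP layer suite
    extra->cb->GuidMixInPtr(in_data->effect_ref,
                            sizeof(layer_to_world), &layer_to_world);

    // ... then check out the input layer and set output rects as usual.
    return PF_Err_NONE;
}
```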

Community Beginner, Mar 28, 2025

Yep, GuidMixInPtr() did the trick. Man you are a GOAT I swear😭. Can't believe I actually made it work. I can finally start working on the blur itself now.

Community Beginner, Mar 28, 2025

Hmm, but from what I can see, it does not detect the motion of other effects like S_BlurMoCurves or any other Transform effect. Is it even possible to make it detect that motion?

Community Expert, Mar 28, 2025

i still don't fully understand what you mean by "detect the motion"... can you give some examples?

Community Beginner, Mar 28, 2025

I mean literally make it apply blur based on effects' transformations, the same way it does when I change the layer's position, scale, etc. Effects like the built-in Transform, S_BlurMoCurves, or any other transform effect can change the layer's position, scale, etc. too, but my current motion blur doesn't detect when those effects alter the layer's transforms.

 

For example: I add Transform and make it move the layer from 540 to 1080 on the X axis from frame 1 to 10. The motion blur effect should blur that motion, but it doesn't, because the motion comes from an effect rather than from the layer itself. Is it possible to make Motion Blur detect and blur motion from other effects too, like Transform?

Community Expert, Mar 29, 2025

ok... there's a bit of a misconception here...

generally speaking, "transformations" refers only to the layer transformation, which is applied to the image that results from all the effects on the layer. each effect operates on the output of the previous one, with the first in the stack operating on the layer's source. that resulting texture is then transformed and composited according to the layer transformation params (and other stuff like parenting and camera).

 

when you're talking about "effect transformation" you're talking about the ever-so-rare occasion where an effect applies a 2d or 3d transformation within itself: it takes its input buffer and transforms it onto its own output buffer. these transformations are baked into the layer's resulting texture, and AE is oblivious to them. for instance, if some effect internally does a 90 degree Y rotation, which leaves the layer facing exactly sideways and not visible from the front, then rotating the layer another 90 degrees on the Y axis won't bring the content back into view. it's just a blank 2D texture at that point.

 

what does all that mean? when you're talking about transformations happening within some effect, that operation isn't "registered" anywhere, and is not considered a "transformation" by AE. it's all filtration/generation/manipulation.

some effects do transformations in an obvious way, with clearly named params that change position and rotation. others, like card dance, break the layer into small pieces, each with its own transformation, so there is no single transformation value for the whole image within that effect.

 

summing up:

1. if you KNOW an effect (like "transform" or "S_BlurMoCurves"), you may read its params and deduce the transformation. but you need to set your effect up to read each such effect's params in advance. there's no general way of knowing which param on some random effect does what exactly...

 

2. some effects (like card dance, or even fractal noise) evolve the layer in ways that can't be described by "a transformation". in such cases the way to motion-blur them would be to:
    a. sample their results at small time intervals and average these images to get a "smear" between time a and time b.
    b. compare the image of the current frame with the image of the next one and do some "pixel analysis" on them to get motion vectors. in AE, you can't request a previous effect's result at different times. you can only request a layer's source, with or without masks and effects, at different times. so using this method your effect could motion blur the content of a comp, but not previous effects on the same layer.
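Approach (a) above can be sketched in plain, self-contained C++ with no SDK calls; a toy 1-D "image" stands in for checking out the layer at offset times, and the function names are illustrative:

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t kWidth = 8;
using Scanline = std::array<float, kWidth>;

// Toy "render": a single white pixel whose position depends on time t in [0,1).
// In a real plug-in this would be a layer checkout at an offset time.
Scanline render_at(float t) {
    Scanline img{};  // all zeros
    img[static_cast<std::size_t>(t * kWidth) % kWidth] = 1.0f;
    return img;
}

// Sample the image at n sub-frame times spread across the shutter interval
// [t0, t0 + shutter) and average them into a "smear".
Scanline motion_blur(float t0, float shutter, int n) {
    Scanline acc{};
    for (int i = 0; i < n; ++i) {
        const float t = t0 + shutter * (static_cast<float>(i) / n);
        const Scanline sample = render_at(t);
        for (std::size_t x = 0; x < kWidth; ++x)
            acc[x] += sample[x] / static_cast<float>(n);
    }
    return acc;
}
```

Blurring over half a frame with 4 samples spreads the pixel's energy evenly over the 4 columns it passes through; more samples give a smoother smear at a higher render cost.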

Community Beginner, Mar 29, 2025

Alright, so it's basically impossible to make it work on random effects (because I don't have their code), but it is possible to make it work on my own plugins. That's perfect honestly. How do I do it?💀
