dkt0
Inspiring
March 26, 2025
Question

Motion Blur - Accessing Layer's Transforms

  • March 26, 2025
  • 2 replies
  • 6971 views

Yo

 

I've been trying for quite some time to make Motion Blur as a normal effect plug-in and I want to ask if it is even possible. Can I access the layer's transforms through a normal effect plug-in? What do I need to use?

 

I want to be sure whether it is possible or not. The only other way would be making it an AEGP, right? I would like to avoid that.

2 replies

Warren Heaton
Community Expert
March 26, 2025

I don't know if this will help with what you are trying to achieve or not, but the third-party plug-in ReelSmart Motion Blur by RE:Vision Effects is well known for applying natural-looking motion blur as an effect, as an alternative to the built-in Motion Blur switch.

dkt0
Author
Inspiring
March 26, 2025

I know about RSMB, but I want to code my own Motion Blur plugin. Thanks anyway.

Community Expert
March 26, 2025

Either AEGP_GetLayerToWorldXform or AEGP_GetLayerToWorldXformFromView will get you the layer's transformation matrix for a given time, which you can use to calculate motion blur.

However, the rest of your question is a bit too broad. Can you explain what you're trying to do in more detail?

dkt0
Author
Inspiring
March 26, 2025

Thanks. 

 

At least now I know I can make it. A simple example of how to use those to access the transforms would be helpful.

 

And as for the rest of the question: I was asking about making Motion Blur as an AEGP (an Artisan, I guess) in case making it a normal effect plug-in was not an option.

 

Also, for a tiles effect, I use the same stuff, right? I need to access the layer's transforms for that one too.

Community Expert
March 28, 2025

Yes, but I just ran into a new problem 😭

 

I think I managed to access the layer's transforms properly, BUT the problem now is that once a frame gets processed as "motion" or "no motion", it stays that way permanently; it never gets re-processed. So if I add the plugin and the first 10 frames get processed as "no motion", for example, they stay that way even after I add keyframes, and vice versa.

 

From what the AI says, the problem is with cached frames or something like that.

#include "MotionBlur.h"
#include "AE_Macros.h"
#include "AE_Effect.h"
#include "AE_EffectCB.h"
#include "AE_EffectSuites.h"
#include <math.h>
#include <fstream>
#include <ctime>
#include <string>

// Define M_PI if it's not defined
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#ifndef kPF_WorldTransformSuite
#define kPF_WorldTransformSuite "PF World Transform Suite"
#define kPF_WorldTransformSuiteVersion1 1
#endif

// Define PF_NewWorldFlag_DEEP if not defined
#ifndef PF_NewWorldFlag_DEEP
#define PF_NewWorldFlag_DEEP 1
#endif

// Plugin ID (unused here; would be set during AEGP registration)
static AEGP_PluginID g_plugin_id = 0;

// Structure to store layer transform data
typedef struct {
    PF_FpLong position_x;  // Position X
    PF_FpLong position_y;  // Position Y
    PF_FpLong scale_x;     // Scale X
    PF_FpLong scale_y;     // Scale Y
    PF_FpLong rotation;    // Rotation
} LayerTransformData;

// Modified MotionBlurSequenceData structure
typedef struct {
    LayerTransformData prev_transform;
    LayerTransformData curr_transform;
    PF_Boolean transform_changed;
    PF_FpLong motion_x;
    PF_FpLong motion_y;
    PF_FpLong scale_factor;
    PF_FpLong rotation_delta;
    // Add a counter for frames with movement
    A_long movement_frames_count;
    // Add a flag to indicate current frame has movement
    PF_Boolean current_frame_has_movement;
    // Add a timestamp for the last calculation
    A_long last_calculation_time;
    // Add a frame cache to track which frames have motion (up to 1000 frames)
    // NOTE: We'll still keep this for logging/debugging, but won't rely on it for motion detection
    PF_Boolean frame_has_motion[1000];
    // Track the last processed frame
    A_long last_processed_frame;
} MotionBlurSequenceData;

// Enhanced debug logging function
static void
LogMotionDetection(
    PF_InData* in_data,
    MotionBlurSequenceData* seqP,
    const A_Matrix4* prev_matrix,
    const A_Matrix4* curr_matrix,
    const A_Matrix4* next_matrix,
    bool forceLog = false)
{
    // Log even if transform didn't change when forceLog is true
    if (!seqP || (!seqP->transform_changed && !forceLog)) {
        return;
    }

    // Create log file path
    std::string logFilePath = "D:\\MotionBlurDebug.log";

    // Open log file in append mode
    std::ofstream logFile(logFilePath.c_str(), std::ios::app);

    if (!logFile.is_open()) {
        return; // Failed to open file
    }

    // Get current time for log timestamp
    time_t now = time(0);
    char timeStr[26];
    ctime_s(timeStr, sizeof(timeStr), &now);
    timeStr[24] = '\0'; // Remove newline

    // Write log header with timestamp and frame info
    logFile << "==== Motion Detection Log: " << timeStr << " ====" << std::endl;
    logFile << "Frame: " << in_data->current_time << "/" << in_data->time_scale << std::endl;
    logFile << "Total Frames with Movement: " << seqP->movement_frames_count << std::endl;

    // Log motion values
    logFile << "Motion X: " << seqP->motion_x << " pixels/frame" << std::endl;
    logFile << "Motion Y: " << seqP->motion_y << " pixels/frame" << std::endl;
    logFile << "Scale Factor: " << seqP->scale_factor << " change/frame" << std::endl;
    logFile << "Rotation Delta: " << seqP->rotation_delta << " degrees/frame" << std::endl;

    // Log motion magnitude
    PF_FpLong motionMagnitude = sqrt(seqP->motion_x * seqP->motion_x + seqP->motion_y * seqP->motion_y);
    logFile << "Motion Magnitude: " << motionMagnitude << " pixels/frame" << std::endl;

    // Log transform status
    logFile << "Transform Changed: " << (seqP->transform_changed ? "YES" : "NO") << std::endl;

    // Log threshold values
    logFile << "Position Threshold: " << 0.1 << " pixels/frame" << std::endl;
    logFile << "Scale Threshold: " << 0.001 << " change/frame" << std::endl;
    logFile << "Rotation Threshold: " << 0.1 << " degrees/frame" << std::endl;

    // Log matrix data if available
    if (prev_matrix && curr_matrix && next_matrix) {
        logFile << "\nMatrix Data:" << std::endl;

        // Log current matrix
        logFile << "Current Matrix:" << std::endl;
        for (int i = 0; i < 4; i++) {
            logFile << "  [";
            for (int j = 0; j < 4; j++) {
                logFile << curr_matrix->mat[i][j];
                if (j < 3) logFile << ", ";
            }
            logFile << "]" << std::endl;
        }

        // Log position data from matrices
        logFile << "\nPosition from matrices:" << std::endl;
        logFile << "  Prev: [" << prev_matrix->mat[3][0] << ", " << prev_matrix->mat[3][1] << ", " << prev_matrix->mat[3][2] << "]" << std::endl;
        logFile << "  Curr: [" << curr_matrix->mat[3][0] << ", " << curr_matrix->mat[3][1] << ", " << curr_matrix->mat[3][2] << "]" << std::endl;
        logFile << "  Next: [" << next_matrix->mat[3][0] << ", " << next_matrix->mat[3][1] << ", " << next_matrix->mat[3][2] << "]" << std::endl;
    }

    // Add separator for readability
    logFile << "----------------------------------------" << std::endl;

    // Close the log file
    logFile.close();
}

static PF_Err
About(
    PF_InData* in_data,
    PF_OutData* out_data,
    PF_ParamDef* params[],
    PF_LayerDef* output)
{
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    suites.ANSICallbacksSuite1()->sprintf(out_data->return_msg,
        "%s v%d.%d\r%s",
        STR_NAME,
        MAJOR_VERSION,
        MINOR_VERSION,
        STR_DESCRIPTION);
    return PF_Err_NONE;
}

static PF_Err
GlobalSetup(
    PF_InData* in_data,
    PF_OutData* out_data,
    PF_ParamDef* params[],
    PF_LayerDef* output)
{
    // Use the exact version number from the PiPL file (524288 = 0x80000)
    out_data->my_version = 524288; // Version 1.0 as defined in PiPL

    out_data->out_flags = PF_OutFlag_DEEP_COLOR_AWARE;    // just 16bpc, not 32bpc
    out_data->out_flags |= PF_OutFlag_I_USE_SHUTTER_ANGLE; // Tell AE we need shutter angle info
    out_data->out_flags |= PF_OutFlag_WIDE_TIME_INPUT;    // We need to check parameters at different times
    out_data->out_flags2 = PF_OutFlag2_NONE;

    return PF_Err_NONE;
}

static PF_Err
ParamsSetup(
    PF_InData* in_data,
    PF_OutData* out_data,
    PF_ParamDef* params[],
    PF_LayerDef* output)
{
    PF_Err        err = PF_Err_NONE;
    PF_ParamDef    def;

    AEFX_CLR_STRUCT(def);

    // Add Tune slider
    PF_ADD_FLOAT_SLIDERX(STR_TUNE_NAME,
        0,
        4,
        0,
        4,
        1,
        PF_Precision_INTEGER,
        0,
        0,
        TUNE_DISK_ID);

    // Add Position checkbox
    AEFX_CLR_STRUCT(def);
    PF_ADD_CHECKBOX(STR_POSITION_NAME,
        "On",
        TRUE,
        0,
        POSITION_DISK_ID);

    // Add Scale checkbox
    AEFX_CLR_STRUCT(def);
    PF_ADD_CHECKBOX(STR_SCALE_NAME,
        "On",
        TRUE,
        0,
        SCALE_DISK_ID);

    // Add Angle checkbox
    AEFX_CLR_STRUCT(def);
    PF_ADD_CHECKBOX(STR_ANGLE_NAME,
        "On",
        TRUE,
        0,
        ANGLE_DISK_ID);

    out_data->num_params = MOTIONBLUR_NUM_PARAMS;

    return err;
}

// Function to get pixel value with bounds checking (16-bit)
static PF_Err
GetPixel16(
    PF_EffectWorld* input,
    int              x,
    int              y,
    PF_Pixel16* outP)
{
    PF_Err err = PF_Err_NONE;

    // Bounds checking
    if (x < 0) x = 0;
    if (y < 0) y = 0;
    if (x >= input->width) x = input->width - 1;
    if (y >= input->height) y = input->height - 1;

    // Get the pixel
    PF_Pixel16* pixelP = (PF_Pixel16*)((char*)input->data +
        (y * input->rowbytes) +
        (x * sizeof(PF_Pixel16)));

    *outP = *pixelP;

    return err;
}

// Function to get pixel value with bounds checking (8-bit)
static PF_Err
GetPixel8(
    PF_EffectWorld* input,
    int              x,
    int              y,
    PF_Pixel8* outP)
{
    PF_Err err = PF_Err_NONE;

    // Bounds checking
    if (x < 0) x = 0;
    if (y < 0) y = 0;
    if (x >= input->width) x = input->width - 1;
    if (y >= input->height) y = input->height - 1;

    // Get the pixel
    PF_Pixel8* pixelP = (PF_Pixel8*)((char*)input->data +
        (y * input->rowbytes) +
        (x * sizeof(PF_Pixel8)));

    *outP = *pixelP;

    return err;
}

// Helper function to extract rotation from a matrix
PF_FpLong extract_rotation(const A_Matrix4& matrix, PF_FpLong scale_x, PF_FpLong scale_y) {
    // Use atan2 on the normalized first column to get rotation
    if (scale_x > 0.0001) {
        PF_FpLong n00 = matrix.mat[0][0] / scale_x;
        PF_FpLong n01 = matrix.mat[0][1] / scale_x;
        return atan2(n01, n00) * 180.0 / M_PI;
    }
    return 0.0;
}

// Modified CalculateTransformChanges to properly recalculate motion state every time
static PF_Err
CalculateTransformChanges(
    PF_InData* in_data,
    MotionBlurSequenceData* seqP)
{
    PF_Err err = PF_Err_NONE;
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    // Get current frame number (normalized to 0-999 range for our cache)
    A_long current_frame = (in_data->current_time / in_data->time_step) % 1000;

    // Store previous motion state for this frame
    bool previous_state = seqP->frame_has_motion[current_frame];

    // IMPORTANT FIX: Always reset motion values at the start of calculation
    // This ensures we don't use stale values if no motion is detected
    seqP->motion_x = 0;
    seqP->motion_y = 0;
    seqP->scale_factor = 0;
    seqP->rotation_delta = 0;
    seqP->transform_changed = false;
    seqP->current_frame_has_movement = false;

    // Get the PF Interface Suite directly
    AEGP_PFInterfaceSuite1* pfInterfaceSuite = suites.PFInterfaceSuite1();

    if (pfInterfaceSuite) {
        // Get the layer handle for the current effect
        AEGP_LayerH layerH = NULL;
        err = pfInterfaceSuite->AEGP_GetEffectLayer(in_data->effect_ref, &layerH);

        if (!err && layerH) {
            // Get the LayerSuite directly
            AEGP_LayerSuite9* layerSuite = suites.LayerSuite9();

            if (layerSuite) {
                // Get current time
                A_Time current_time;
                current_time.value = in_data->current_time;
                current_time.scale = in_data->time_scale;

                // Calculate previous and next times for velocity calculation
                A_Time prev_time, next_time;
                prev_time.scale = next_time.scale = in_data->time_scale;

                // Use a full frame step for detecting motion
                A_long time_step = in_data->time_step;

                prev_time.value = in_data->current_time - time_step;
                next_time.value = in_data->current_time + time_step;

                // Get layer transform at previous, current, and next times
                A_Matrix4 prev_matrix, curr_matrix, next_matrix;

                // Query the layer-to-world matrices at the three sample times
                A_Err err1 = layerSuite->AEGP_GetLayerToWorldXform(layerH, &prev_time, &prev_matrix);
                A_Err err2 = layerSuite->AEGP_GetLayerToWorldXform(layerH, &current_time, &curr_matrix);
                A_Err err3 = layerSuite->AEGP_GetLayerToWorldXform(layerH, &next_time, &next_matrix);

                if (!err1 && !err2 && !err3) {
                    // Extract position, scale, and rotation from matrices
                    LayerTransformData prev_transform, curr_transform, next_transform;

                    // Position is in the last column of the matrix
                    prev_transform.position_x = prev_matrix.mat[3][0];
                    prev_transform.position_y = prev_matrix.mat[3][1];
                    curr_transform.position_x = curr_matrix.mat[3][0];
                    curr_transform.position_y = curr_matrix.mat[3][1];
                    next_transform.position_x = next_matrix.mat[3][0];
                    next_transform.position_y = next_matrix.mat[3][1];

                    // Extract scale from matrices
                    prev_transform.scale_x = sqrt(prev_matrix.mat[0][0] * prev_matrix.mat[0][0] +
                        prev_matrix.mat[0][1] * prev_matrix.mat[0][1]);
                    prev_transform.scale_y = sqrt(prev_matrix.mat[1][0] * prev_matrix.mat[1][0] +
                        prev_matrix.mat[1][1] * prev_matrix.mat[1][1]);

                    curr_transform.scale_x = sqrt(curr_matrix.mat[0][0] * curr_matrix.mat[0][0] +
                        curr_matrix.mat[0][1] * curr_matrix.mat[0][1]);
                    curr_transform.scale_y = sqrt(curr_matrix.mat[1][0] * curr_matrix.mat[1][0] +
                        curr_matrix.mat[1][1] * curr_matrix.mat[1][1]);

                    next_transform.scale_x = sqrt(next_matrix.mat[0][0] * next_matrix.mat[0][0] +
                        next_matrix.mat[0][1] * next_matrix.mat[0][1]);
                    next_transform.scale_y = sqrt(next_matrix.mat[1][0] * next_matrix.mat[1][0] +
                        next_matrix.mat[1][1] * next_matrix.mat[1][1]);

                    // Extract rotation from matrices
                    prev_transform.rotation = extract_rotation(prev_matrix, prev_transform.scale_x, prev_transform.scale_y);
                    curr_transform.rotation = extract_rotation(curr_matrix, curr_transform.scale_x, curr_transform.scale_y);
                    next_transform.rotation = extract_rotation(next_matrix, next_transform.scale_x, next_transform.scale_y);

                    // Calculate velocities (change per frame)
                    PF_FpLong time_factor = (PF_FpLong)in_data->time_scale / time_step;

                    // Position velocity (pixels per frame)
                    seqP->motion_x = (curr_transform.position_x - prev_transform.position_x) * time_factor;
                    seqP->motion_y = (curr_transform.position_y - prev_transform.position_y) * time_factor;

                    // Scale velocity (change per frame)
                    PF_FpLong scale_x_vel = (curr_transform.scale_x - prev_transform.scale_x) * time_factor;
                    PF_FpLong scale_y_vel = (curr_transform.scale_y - prev_transform.scale_y) * time_factor;

                    // Use average of X and Y scale velocity, normalized by current scale
                    PF_FpLong avg_scale = (curr_transform.scale_x + curr_transform.scale_y) / 2.0;
                    if (avg_scale > 0.0001) {
                        seqP->scale_factor = (scale_x_vel + scale_y_vel) / (2.0 * avg_scale);
                    }
                    else {
                        seqP->scale_factor = 0;
                    }

                    // Rotation velocity (degrees per frame)
                    // Need to handle wraparound for rotation
                    PF_FpLong curr_rot = curr_transform.rotation;
                    PF_FpLong prev_rot = prev_transform.rotation;

                    // Normalize rotation difference to be in range [-180, 180]
                    PF_FpLong rot_diff = curr_rot - prev_rot;
                    while (rot_diff > 180.0) rot_diff -= 360.0;
                    while (rot_diff < -180.0) rot_diff += 360.0;

                    seqP->rotation_delta = rot_diff * time_factor;

                    // Determine if there's enough motion to apply blur
                    PF_FpLong pos_speed = sqrt(seqP->motion_x * seqP->motion_x +
                        seqP->motion_y * seqP->motion_y);

                    bool position_changed = (pos_speed > 0.1);  // 0.1 pixels per frame
                    bool scale_changed = (fabs(seqP->scale_factor) > 0.001);  // 0.1% per frame
                    bool rotation_changed = (fabs(seqP->rotation_delta) > 0.1);  // 0.1 degrees per frame

                    // CRITICAL FIX: Always set transform_changed based on current motion detection
                    // Don't rely on previously cached values
                    seqP->current_frame_has_movement = position_changed || scale_changed || rotation_changed;
                    seqP->transform_changed = seqP->current_frame_has_movement;

                    // Update the frame cache for this frame
                    seqP->frame_has_motion[current_frame] = seqP->transform_changed;

                    // Log motion detection information to file
                    LogMotionDetection(in_data, seqP, &prev_matrix, &curr_matrix, &next_matrix, true);

                    // CRITICAL FIX: Increment counter if this frame now has motion and either:
                    // 1. It's a new frame we haven't processed before, or
                    // 2. It previously didn't have motion but now does
                    if (seqP->transform_changed &&
                        (seqP->last_processed_frame != current_frame || !previous_state)) {
                        seqP->movement_frames_count++;
                    }

                    // Store current transform for next time
                    seqP->curr_transform = curr_transform;
                    seqP->prev_transform = prev_transform;
                }
                else {
                    // Log error
                    std::string logFilePath = "D:\\MotionBlurDebug.log";
                    std::ofstream logFile(logFilePath.c_str(), std::ios::app);
                    if (logFile.is_open()) {
                        logFile << "AEGP_GetLayerToWorldXform ERROR: "
                            << "prev=" << err1 << ", curr=" << err2 << ", next=" << err3
                            << " at frame " << current_frame << std::endl;
                        logFile.close();
                    }

                    // Report error
                    suites.UtilitySuite6()->AEGP_ReportInfo(NULL, "AEGP_GetLayerToWorldXform FAILED!");
                }

                // Update the timestamp and last processed frame
                seqP->last_calculation_time = in_data->current_time;
                seqP->last_processed_frame = current_frame;
            }
        }
    }

    return err;
}


static void
ClearMotionBlurState(MotionBlurSequenceData* seqP)
{
    // Reset all motion values to zero
    seqP->motion_x = 0;
    seqP->motion_y = 0;
    seqP->scale_factor = 0;
    seqP->rotation_delta = 0;

    // Reset flags
    seqP->transform_changed = false;
    seqP->current_frame_has_movement = false;
}

// Modified MotionBlurFunc16 function to always use current motion state
static PF_Err
MotionBlurFunc16(
    void* refcon,
    A_long        xL,
    A_long        yL,
    PF_Pixel16* inP,
    PF_Pixel16* outP)
{
    PF_Err err = PF_Err_NONE;

    MotionBlurInfo* mbiP = reinterpret_cast<MotionBlurInfo*>(refcon);
    if (!mbiP || !mbiP->input) {
        *outP = *inP; // Safety check
        return err;
    }

    // Initialize output with current pixel - always do this first
    *outP = *inP;

    // Skip blur if tune is zero
    if (mbiP->tuneF <= 0) {
        return err;
    }

    // Check if sequence data is available
    if (mbiP->sequence_data) {
        AEGP_SuiteHandler suites(mbiP->in_data->pica_basicP);

        // Lock handle to access sequence data
        MotionBlurSequenceData* seqP = reinterpret_cast<MotionBlurSequenceData*>(
            suites.HandleSuite1()->host_lock_handle(mbiP->sequence_data));

        if (seqP) {
            // Add a red box in top-left corner ONLY if movement is detected in this frame
            bool in_movement_indicator_zone = (xL < 50 && yL < 50);

            if (in_movement_indicator_zone && seqP->transform_changed) {
                outP->red = 32768;    // Red (16-bit value)
                outP->green = 0;
                outP->blue = 0;
                outP->alpha = 32768;  // Full opacity (16-bit value)

                // Unlock handle before returning
                suites.HandleSuite1()->host_unlock_handle(mbiP->sequence_data);
                return err;
            }

            // CRITICAL FIX: ONLY apply blur if transform_changed is true for the current frame
            // Don't rely on cached frame_has_motion values
            if (!seqP->transform_changed) {
                // No movement in current frame, just return the original pixel
                suites.HandleSuite1()->host_unlock_handle(mbiP->sequence_data);
                return err;
            }

            // Motion values must be non-zero to apply blur
            bool has_position_motion = (fabs(seqP->motion_x) > 0.1 || fabs(seqP->motion_y) > 0.1);
            bool has_scale_motion = (fabs(seqP->scale_factor) > 0.001);
            bool has_rotation_motion = (fabs(seqP->rotation_delta) > 0.1);

            // If no actual motion values, don't apply blur
            if (!has_position_motion && !has_scale_motion && !has_rotation_motion) {
                suites.HandleSuite1()->host_unlock_handle(mbiP->sequence_data);
                return err;
            }

            // Calculate center of layer
            PF_FpLong center_x = mbiP->input->width / 2.0;
            PF_FpLong center_y = mbiP->input->height / 2.0;

            // Calculate pixel position relative to center
            PF_FpLong dx = xL - center_x;
            PF_FpLong dy = yL - center_y;

            // Initialize accumulators for the final color
            A_long redSum = inP->red;
            A_long greenSum = inP->green;
            A_long blueSum = inP->blue;
            A_long alphaSum = inP->alpha;
            int totalSamples = 1;  // Start with 1 for the original pixel

            // Position-based motion blur
            if (mbiP->position && has_position_motion) {
                // Get motion velocity scaled by tune parameter
                PF_FpLong vel_x = seqP->motion_x * mbiP->tuneF;
                PF_FpLong vel_y = seqP->motion_y * mbiP->tuneF;

                // Calculate speed in pixels per frame
                PF_FpLong speed = sqrt(vel_x * vel_x + vel_y * vel_y);

                // Calculate blur length based on shutter angle
                PF_FpLong shutter_factor = mbiP->in_data->shutter_angle / 360.0;
                PF_FpLong blur_length = speed * shutter_factor;

                // Determine number of samples based on blur length
                int nSamples = (int)MIN(MAX(blur_length * 2.0, 1.0), 100.0);

                if (nSamples > 1) {
                    // Sample along the motion vector
                    for (int i = 1; i < nSamples; i++) {
                        // Calculate offset: (i/(nSamples-1) - 0.5) * blur_length
                        PF_FpLong fraction = (PF_FpLong)i / (nSamples - 1) - 0.5;
                        PF_FpLong offset_x = vel_x * fraction * shutter_factor;
                        PF_FpLong offset_y = vel_y * fraction * shutter_factor;

                        // Calculate sample position
                        int sampleX = xL - (int)offset_x;
                        int sampleY = yL - (int)offset_y;

                        // Sample pixel with bounds checking
                        PF_Pixel16 sampledPixel;
                        GetPixel16(mbiP->input, sampleX, sampleY, &sampledPixel);

                        // Accumulate
                        redSum += sampledPixel.red;
                        greenSum += sampledPixel.green;
                        blueSum += sampledPixel.blue;
                        alphaSum += sampledPixel.alpha;
                        totalSamples++;
                    }
                }
            }

            // Scale-based motion blur
            if (mbiP->scale && has_scale_motion) {
                // Calculate distance from center
                PF_FpLong distance = sqrt(dx * dx + dy * dy);

                if (distance > 0.1) {
                    // Normalize vector from center to pixel
                    PF_FpLong dir_x = dx / distance;
                    PF_FpLong dir_y = dy / distance;

                    // Calculate scale velocity
                    PF_FpLong scale_vel = seqP->scale_factor * mbiP->tuneF;

                    // Calculate blur length based on distance and scale velocity
                    PF_FpLong shutter_factor = mbiP->in_data->shutter_angle / 360.0;
                    PF_FpLong blur_length = distance * fabs(scale_vel) * shutter_factor;

                    // Determine number of samples
                    int nSamples = (int)MIN(MAX(blur_length / 5.0, 1.0), 100.0);

                    if (nSamples > 1) {
                        // Sample along the radial direction
                        for (int i = 1; i < nSamples; i++) {
                            // Calculate offset: (i/(nSamples-1) - 0.5) * blur_length
                            PF_FpLong fraction = (PF_FpLong)i / (nSamples - 1) - 0.5;
                            PF_FpLong offset_magnitude = distance * scale_vel * fraction * shutter_factor;

                            // Calculate sample position
                            int sampleX = xL - (int)(dir_x * offset_magnitude);
                            int sampleY = yL - (int)(dir_y * offset_magnitude);

                            // Sample pixel with bounds checking
                            PF_Pixel16 sampledPixel;
                            GetPixel16(mbiP->input, sampleX, sampleY, &sampledPixel);

                            // Accumulate
                            redSum += sampledPixel.red;
                            greenSum += sampledPixel.green;
                            blueSum += sampledPixel.blue;
                            alphaSum += sampledPixel.alpha;
                            totalSamples++;
                        }
                    }
                }
            }

            // Rotation-based motion blur
            if (mbiP->angle && has_rotation_motion) {
                // Calculate distance from center
                PF_FpLong distance = sqrt(dx * dx + dy * dy);

                if (distance > 0.1) {
                    // Calculate rotation velocity in radians
                    PF_FpLong rot_vel = seqP->rotation_delta * M_PI / 180.0 * mbiP->tuneF;

                    // Calculate blur angle based on rotation velocity and shutter angle
                    PF_FpLong shutter_factor = mbiP->in_data->shutter_angle / 360.0;
                    PF_FpLong blur_angle = rot_vel * shutter_factor;

                    // Calculate arc length
                    PF_FpLong arc_length = distance * fabs(blur_angle);

                    // Determine number of samples
                    int nSamples = (int)MIN(MAX(arc_length / 5.0, 1.0), 100.0);

                    if (nSamples > 1) {
                        // Calculate half angle for symmetric sampling
                        PF_FpLong half_angle = blur_angle / 2.0;

                        // Sample along the arc
                        for (int i = 1; i < nSamples; i++) {
                            // Calculate sample angle
                            PF_FpLong sample_angle = -half_angle + blur_angle * (PF_FpLong)i / (PF_FpLong)nSamples;

                            // Calculate rotation matrix components
                            PF_FpLong sin_a = sin(sample_angle);
                            PF_FpLong cos_a = cos(sample_angle);

                            // Rotate point around center
                            PF_FpLong rotX = dx * cos_a - dy * sin_a;
                            PF_FpLong rotY = dx * sin_a + dy * cos_a;

                            // Calculate sample position
                            int sampleX = (int)(center_x + rotX);
                            int sampleY = (int)(center_y + rotY);

                            // Sample pixel with bounds checking
                            PF_Pixel16 sampledPixel;
                            GetPixel16(mbiP->input, sampleX, sampleY, &sampledPixel);

                            // Accumulate
                            redSum += sampledPixel.red;
                            greenSum += sampledPixel.green;
                            blueSum += sampledPixel.blue;
                            alphaSum += sampledPixel.alpha;
                            totalSamples++;
                        }
                    }
                }
            }

            // Average the accumulated values
            if (totalSamples > 1) {
                outP->red = (A_u_short)(redSum / totalSamples);
                outP->green = (A_u_short)(greenSum / totalSamples);
                outP->blue = (A_u_short)(blueSum / totalSamples);
                outP->alpha = (A_u_short)(alphaSum / totalSamples);
            }

            // Unlock handle
            suites.HandleSuite1()->host_unlock_handle(mbiP->sequence_data);
        }
    }

    return err;
}

// Modified MotionBlurFunc8 function to always use current motion state
static PF_Err
MotionBlurFunc8(
    void* refcon,
    A_long        xL,
    A_long        yL,
    PF_Pixel8* inP,
    PF_Pixel8* outP)
{
    PF_Err err = PF_Err_NONE;

    MotionBlurInfo* mbiP = reinterpret_cast<MotionBlurInfo*>(refcon);
    if (!mbiP || !mbiP->input) {
        *outP = *inP; // Safety check
        return err;
    }

    // Initialize output with current pixel - always do this first
    *outP = *inP;

    // Skip blur if tune is zero
    if (mbiP->tuneF <= 0) {
        return err;
    }

    // Check if sequence data is available
    if (mbiP->sequence_data) {
        AEGP_SuiteHandler suites(mbiP->in_data->pica_basicP);

        // Lock handle to access sequence data
        MotionBlurSequenceData* seqP = reinterpret_cast<MotionBlurSequenceData*>(
            suites.HandleSuite1()->host_lock_handle(mbiP->sequence_data));

        if (seqP) {
            // Add a red box in top-left corner ONLY if movement is detected in this frame
            bool in_movement_indicator_zone = (xL < 50 && yL < 50);

            if (in_movement_indicator_zone && seqP->transform_changed) {
                outP->red = 255;    // Red (8-bit value)
                outP->green = 0;
                outP->blue = 0;
                outP->alpha = 255;  // Full opacity (8-bit value)

                // Unlock handle before returning
                suites.HandleSuite1()->host_unlock_handle(mbiP->sequence_data);
                return err;
            }

            // CRITICAL FIX: ONLY apply blur if transform_changed is true for the current frame
            // Don't rely on cached frame_has_motion values
            if (!seqP->transform_changed) {
                // No movement in current frame, just return the original pixel
                suites.HandleSuite1()->host_unlock_handle(mbiP->sequence_data);
                return err;
            }

            // Motion values must be non-zero to apply blur
            bool has_position_motion = (fabs(seqP->motion_x) > 0.1 || fabs(seqP->motion_y) > 0.1);
            bool has_scale_motion = (fabs(seqP->scale_factor) > 0.001);
            bool has_rotation_motion = (fabs(seqP->rotation_delta) > 0.1);

            // If no actual motion values, don't apply blur
            if (!has_position_motion && !has_scale_motion && !has_rotation_motion) {
                suites.HandleSuite1()->host_unlock_handle(mbiP->sequence_data);
                return err;
            }

            // Calculate center of layer
            PF_FpLong center_x = mbiP->input->width / 2.0;
            PF_FpLong center_y = mbiP->input->height / 2.0;

            // Calculate pixel position relative to center
            PF_FpLong dx = xL - center_x;
            PF_FpLong dy = yL - center_y;

            // Initialize accumulators for the final color
            A_long redSum = inP->red;
            A_long greenSum = inP->green;
            A_long blueSum = inP->blue;
            A_long alphaSum = inP->alpha;
            int totalSamples = 1;  // Start with 1 for the original pixel

            // Position-based motion blur
            if (mbiP->position && has_position_motion) {
                // Get motion velocity scaled by tune parameter
                PF_FpLong vel_x = seqP->motion_x * mbiP->tuneF;
                PF_FpLong vel_y = seqP->motion_y * mbiP->tuneF;

                // Calculate speed in pixels per frame
                PF_FpLong speed = sqrt(vel_x * vel_x + vel_y * vel_y);

                // Calculate blur length based on shutter angle
                PF_FpLong shutter_factor = mbiP->in_data->shutter_angle / 360.0;
                PF_FpLong blur_length = speed * shutter_factor;

                // Determine number of samples based on blur length
                int nSamples = (int)MIN(MAX(blur_length * 2.0, 1.0), 30.0);

                if (nSamples > 1) {
                    // Sample along the motion vector
                    for (int i = 1; i < nSamples; i++) {
                        // Calculate offset: (i/(nSamples-1) - 0.5) * blur_length
                        PF_FpLong fraction = (PF_FpLong)i / (nSamples - 1) - 0.5;
                        PF_FpLong offset_x = vel_x * fraction * shutter_factor;
                        PF_FpLong offset_y = vel_y * fraction * shutter_factor;

                        // Calculate sample position
                        int sampleX = xL - (int)offset_x;
                        int sampleY = yL - (int)offset_y;

                        // Sample pixel with bounds checking
                        PF_Pixel8 sampledPixel;
                        GetPixel8(mbiP->input, sampleX, sampleY, &sampledPixel);

                        // Accumulate
                        redSum += sampledPixel.red;
                        greenSum += sampledPixel.green;
                        blueSum += sampledPixel.blue;
                        alphaSum += sampledPixel.alpha;
                        totalSamples++;
                    }
                }
            }

            // Scale-based motion blur
            if (mbiP->scale && has_scale_motion) {
                // Calculate distance from center
                PF_FpLong distance = sqrt(dx * dx + dy * dy);

                if (distance > 0.1) {
                    // Normalize vector from center to pixel
                    PF_FpLong dir_x = dx / distance;
                    PF_FpLong dir_y = dy / distance;

                    // Calculate scale velocity
                    PF_FpLong scale_vel = seqP->scale_factor * mbiP->tuneF;

                    // Calculate blur length based on distance and scale velocity
                    PF_FpLong shutter_factor = mbiP->in_data->shutter_angle / 360.0;
                    PF_FpLong blur_length = distance * fabs(scale_vel) * shutter_factor;

                    // Determine number of samples
                    int nSamples = (int)MIN(MAX(blur_length / 5.0, 1.0), 30.0);

                    if (nSamples > 1) {
                        // Sample along the radial direction
                        for (int i = 1; i < nSamples; i++) {
                            // Calculate offset: (i/(nSamples-1) - 0.5) * blur_length
                            PF_FpLong fraction = (PF_FpLong)i / (nSamples - 1) - 0.5;
                            PF_FpLong offset_magnitude = distance * scale_vel * fraction * shutter_factor;

                            // Calculate sample position
                            int sampleX = xL - (int)(dir_x * offset_magnitude);
                            int sampleY = yL - (int)(dir_y * offset_magnitude);

                            // Sample pixel with bounds checking
                            PF_Pixel8 sampledPixel;
                            GetPixel8(mbiP->input, sampleX, sampleY, &sampledPixel);

                            // Accumulate
                            redSum += sampledPixel.red;
                            greenSum += sampledPixel.green;
                            blueSum += sampledPixel.blue;
                            alphaSum += sampledPixel.alpha;
                            totalSamples++;
                        }
                    }
                }
            }

            // Rotation-based motion blur
            if (mbiP->angle && has_rotation_motion) {
                // Calculate distance from center
                PF_FpLong distance = sqrt(dx * dx + dy * dy);

                if (distance > 0.1) {
                    // Calculate rotation velocity in radians
                    PF_FpLong rot_vel = seqP->rotation_delta * M_PI / 180.0 * mbiP->tuneF;

                    // Calculate blur angle based on rotation velocity and shutter angle
                    PF_FpLong shutter_factor = mbiP->in_data->shutter_angle / 360.0;
                    PF_FpLong blur_angle = rot_vel * shutter_factor;

                    // Calculate arc length
                    PF_FpLong arc_length = distance * fabs(blur_angle);

                    // Determine number of samples
                    int nSamples = (int)MIN(MAX(arc_length / 10.0, 1.0), 30.0);

                    if (nSamples > 1) {
                        // Calculate half angle for symmetric sampling
                        PF_FpLong half_angle = blur_angle / 2.0;

                        // Sample along the arc
                        for (int i = 1; i < nSamples; i++) {
                            // Calculate sample angle
                            PF_FpLong sample_angle = -half_angle + blur_angle * (PF_FpLong)i / (PF_FpLong)nSamples;

                            // Calculate rotation matrix components
                            PF_FpLong sin_a = sin(sample_angle);
                            PF_FpLong cos_a = cos(sample_angle);

                            // Rotate point around center
                            PF_FpLong rotX = dx * cos_a - dy * sin_a;
                            PF_FpLong rotY = dx * sin_a + dy * cos_a;

                            // Calculate sample position
                            int sampleX = (int)(center_x + rotX);
                            int sampleY = (int)(center_y + rotY);

                            // Sample pixel with bounds checking
                            PF_Pixel8 sampledPixel;
                            GetPixel8(mbiP->input, sampleX, sampleY, &sampledPixel);

                            // Accumulate
                            redSum += sampledPixel.red;
                            greenSum += sampledPixel.green;
                            blueSum += sampledPixel.blue;
                            alphaSum += sampledPixel.alpha;
                            totalSamples++;
                        }
                    }
                }
            }

            // Average the accumulated values
            if (totalSamples > 1) {
                outP->red = (A_u_char)(redSum / totalSamples);
                outP->green = (A_u_char)(greenSum / totalSamples);
                outP->blue = (A_u_char)(blueSum / totalSamples);
                outP->alpha = (A_u_char)(alphaSum / totalSamples);
            }

            // Unlock handle
            suites.HandleSuite1()->host_unlock_handle(mbiP->sequence_data);
        }
    }

    return err;
}

// Modified ResetSequenceData function to properly reset all state
static PF_Err
ResetSequenceData(PF_InData* in_data)
{
    PF_Err err = PF_Err_NONE;
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    if (in_data->sequence_data) {
        MotionBlurSequenceData* seqP = reinterpret_cast<MotionBlurSequenceData*>(
            suites.HandleSuite1()->host_lock_handle(in_data->sequence_data));

        if (seqP) {
            // Reset all motion values
            seqP->motion_x = 0;
            seqP->motion_y = 0;
            seqP->scale_factor = 0;
            seqP->rotation_delta = 0;

            // Reset flags
            seqP->transform_changed = false;
            seqP->current_frame_has_movement = false;

            // Reset prev and curr transforms to identity
            seqP->prev_transform.position_x = 0;
            seqP->prev_transform.position_y = 0;
            seqP->prev_transform.scale_x = 1.0;
            seqP->prev_transform.scale_y = 1.0;
            seqP->prev_transform.rotation = 0;

            seqP->curr_transform.position_x = 0;
            seqP->curr_transform.position_y = 0;
            seqP->curr_transform.scale_x = 1.0;
            seqP->curr_transform.scale_y = 1.0;
            seqP->curr_transform.rotation = 0;

            // Reset frame cache - clear all frame motion flags
            for (int i = 0; i < 1000; i++) {
                seqP->frame_has_motion[i] = false;
            }

            // Reset counters
            seqP->movement_frames_count = 0;
            seqP->last_calculation_time = 0;
            seqP->last_processed_frame = -1;

            // Log reset
            std::string logFilePath = "D:\\MotionBlurDebug.log";
            std::ofstream logFile(logFilePath.c_str(), std::ios::app);
            if (logFile.is_open()) {
                // Get current time for log timestamp
                time_t now = time(0);
                char timeStr[26];
                ctime_s(timeStr, sizeof(timeStr), &now);
                timeStr[24] = '\0'; // Remove newline

                logFile << "==== SEQUENCE DATA RESET: " << timeStr << " ====" << std::endl;
                logFile << "----------------------------------------" << std::endl;
                logFile.close();
            }

            // Unlock handle
            suites.HandleSuite1()->host_unlock_handle(in_data->sequence_data);
        }
    }

    return err;
}

// Modify SequenceSetup to initialize sequence data properly
static PF_Err
SequenceSetup(
    PF_InData* in_data,
    PF_OutData* out_data,
    PF_ParamDef* params[],
    PF_LayerDef* output)
{
    PF_Err err = PF_Err_NONE;
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    // Allocate sequence data
    PF_Handle seqH = suites.HandleSuite1()->host_new_handle(sizeof(MotionBlurSequenceData));

    if (!seqH) {
        return PF_Err_OUT_OF_MEMORY;
    }

    // Lock handle and initialize sequence data
    MotionBlurSequenceData* seqP = reinterpret_cast<MotionBlurSequenceData*>(
        suites.HandleSuite1()->host_lock_handle(seqH));

    if (!seqP) {
        suites.HandleSuite1()->host_dispose_handle(seqH);
        return PF_Err_OUT_OF_MEMORY;
    }

    // Initialize sequence data
    memset(seqP, 0, sizeof(MotionBlurSequenceData));

    // Set default values for transform data
    seqP->prev_transform.scale_x = 1.0;
    seqP->prev_transform.scale_y = 1.0;
    seqP->curr_transform.scale_x = 1.0;
    seqP->curr_transform.scale_y = 1.0;

    // Initialize frame cache - set all frames to no motion
    for (int i = 0; i < 1000; i++) {
        seqP->frame_has_motion[i] = false;
    }

    // Initialize movement frames counter
    seqP->movement_frames_count = 0;
    seqP->current_frame_has_movement = false;
    seqP->last_calculation_time = 0;
    seqP->last_processed_frame = -1;

    // Unlock handle
    suites.HandleSuite1()->host_unlock_handle(seqH);

    // Store sequence data
    out_data->sequence_data = seqH;

    return err;
}

// SequenceSetdown frees the sequence data
static PF_Err
SequenceSetdown(
    PF_InData* in_data,
    PF_OutData* out_data,
    PF_ParamDef* params[],
    PF_LayerDef* output)
{
    PF_Err err = PF_Err_NONE;
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    // Log final count of frames with movement
    if (in_data->sequence_data) {
        MotionBlurSequenceData* seqP = reinterpret_cast<MotionBlurSequenceData*>(
            suites.HandleSuite1()->host_lock_handle(in_data->sequence_data));

        if (seqP) {
            // Create log file path
            std::string logFilePath = "D:\\MotionBlurDebug.log";

            // Open log file in append mode
            std::ofstream logFile(logFilePath.c_str(), std::ios::app);

            if (logFile.is_open()) {
                // Get current time for log timestamp
                time_t now = time(0);
                char timeStr[26];
                ctime_s(timeStr, sizeof(timeStr), &now);
                timeStr[24] = '\0'; // Remove newline

                // Write summary log
                logFile << "==== SEQUENCE COMPLETE: " << timeStr << " ====" << std::endl;
                logFile << "Total Frames with Movement: " << seqP->movement_frames_count << std::endl;
                logFile << "----------------------------------------" << std::endl << std::endl;

                // Close the log file
                logFile.close();
            }

            // Unlock handle
            suites.HandleSuite1()->host_unlock_handle(in_data->sequence_data);
        }

        // Free sequence data and clear the output handle so AE
        // doesn't hold a pointer to the disposed memory
        suites.HandleSuite1()->host_dispose_handle(in_data->sequence_data);
        out_data->sequence_data = NULL;
    }

    return err;
}

// In the FrameSetup function:

static PF_Err
FrameSetup(
    PF_InData* in_data,
    PF_OutData* out_data,
    PF_ParamDef* params[],
    PF_LayerDef* output)
{
    PF_Err err = PF_Err_NONE;
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    // Get sequence data
    if (in_data->sequence_data) {
        MotionBlurSequenceData* seqP = reinterpret_cast<MotionBlurSequenceData*>(
            suites.HandleSuite1()->host_lock_handle(in_data->sequence_data));

        if (seqP) {
            // Get current frame number (guard against time_step == 0,
            // which AE can pass during certain non-render calls)
            A_long current_frame = (in_data->time_step != 0)
                ? (in_data->current_time / in_data->time_step) % 1000
                : 0;

            // CRITICAL FIX: Reset the frame cache for the current frame
            // This ensures we always recalculate motion for every frame, even if processed before
            seqP->frame_has_motion[current_frame] = false;

            // Reset motion flags at the start of each frame
            seqP->current_frame_has_movement = false;
            seqP->transform_changed = false;

            // Reset motion values to zero at the start of each frame
            seqP->motion_x = 0;
            seqP->motion_y = 0;
            seqP->scale_factor = 0;
            seqP->rotation_delta = 0;

            // Always recalculate transform changes for each frame
            err = CalculateTransformChanges(in_data, seqP);

            // Unlock handle
            suites.HandleSuite1()->host_unlock_handle(in_data->sequence_data);
        }
    }

    return err;
}


static PF_Err
Render(
    PF_InData* in_data,
    PF_OutData* out_data,
    PF_ParamDef* params[],
    PF_LayerDef* output)
{
    PF_Err err = PF_Err_NONE;
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    MotionBlurInfo mbiP;
    AEFX_CLR_STRUCT(mbiP);

    // Set up info structure
    mbiP.tuneF = params[MOTIONBLUR_TUNE]->u.fs_d.value;
    mbiP.position = params[MOTIONBLUR_POSITION]->u.bd.value;
    mbiP.scale = params[MOTIONBLUR_SCALE]->u.bd.value;
    mbiP.angle = params[MOTIONBLUR_ANGLE]->u.bd.value;
    mbiP.input = &params[MOTIONBLUR_INPUT]->u.ld;
    mbiP.sequence_data = in_data->sequence_data;
    mbiP.in_data = in_data;

    // Calculate area to process
    A_long linesL = output->height;
    PF_Rect area = { 0, 0, output->width, output->height };

    // Apply blur using our pixel functions
    if (PF_WORLD_IS_DEEP(output)) {
        ERR(suites.Iterate16Suite1()->iterate(in_data,
            0,
            linesL,
            &params[MOTIONBLUR_INPUT]->u.ld,
            &area,
            (void*)&mbiP,
            MotionBlurFunc16,
            output));
    }
    else {
        ERR(suites.Iterate8Suite1()->iterate(in_data,
            0,
            linesL,
            &params[MOTIONBLUR_INPUT]->u.ld,
            &area,
            (void*)&mbiP,
            MotionBlurFunc8,
            output));
    }

    return err;
}

extern "C" DllExport
PF_Err PluginDataEntryFunction(
    PF_PluginDataPtr inPtr,
    PF_PluginDataCB inPluginDataCallBackPtr,
    SPBasicSuite* inSPBasicSuitePtr,
    const char* inHostName,
    const char* inHostVersion)
{
    PF_Err result = PF_Err_INVALID_CALLBACK;

    result = PF_REGISTER_EFFECT(
        inPtr,
        inPluginDataCallBackPtr,
        "Motion Blur", // Name
        "DKT Motion Blur", // Match Name
        "Sample Plug-ins", // Category
        AE_RESERVED_INFO); // Reserved Info

    return result;
}

PF_Err
EffectMain(
    PF_Cmd            cmd,
    PF_InData* in_data,
    PF_OutData* out_data,
    PF_ParamDef* params[],
    PF_LayerDef* output,
    void* extra)
{
    PF_Err        err = PF_Err_NONE;

    try {
        switch (cmd) {
        case PF_Cmd_ABOUT:
            err = About(in_data, out_data, params, output);
            break;

        case PF_Cmd_GLOBAL_SETUP:
            err = GlobalSetup(in_data, out_data, params, output);
            break;

        case PF_Cmd_PARAMS_SETUP:
            err = ParamsSetup(in_data, out_data, params, output);
            break;

        case PF_Cmd_SEQUENCE_SETUP:
            err = SequenceSetup(in_data, out_data, params, output);
            break;

        case PF_Cmd_SEQUENCE_SETDOWN:
            err = SequenceSetdown(in_data, out_data, params, output);
            break;

        case PF_Cmd_FRAME_SETUP:
            err = FrameSetup(in_data, out_data, params, output);
            break;

        case PF_Cmd_RENDER:
            err = Render(in_data, out_data, params, output);
            break;
        }
    }
    catch (PF_Err& thrown_err) {
        err = thrown_err;
    }
    return err;
}

When the source of an effect is unchanging over time and the effect's params are not animating, AE assumes it's safe to cache the result and not re-render it. The layer itself animating in the comp is irrelevant to that decision.

You can do two things:

1. Set PF_OutFlag_NON_PARAM_VARY during global setup. This tells AE that you might need to re-render even when the source and params haven't changed.
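A minimal sketch of option 1, assuming a typical GlobalSetup (your version numbers and the other flags you already set will differ — the point is just OR-ing the flag in):

```cpp
// Sketch only: OR PF_OutFlag_NON_PARAM_VARY into whatever out_flags
// your GlobalSetup already sets. Requires the AE SDK headers.
static PF_Err
GlobalSetup(
    PF_InData*   in_data,
    PF_OutData*  out_data,
    PF_ParamDef* params[],
    PF_LayerDef* output)
{
    out_data->my_version = PF_VERSION(1, 0, 0, PF_Stage_DEVELOP, 1);

    // NON_PARAM_VARY: output may change even when source and params are static,
    // so AE won't blindly reuse cached frames.
    out_data->out_flags = PF_OutFlag_DEEP_COLOR_AWARE |
                          PF_OutFlag_NON_PARAM_VARY;

    return PF_Err_NONE;
}
```

Note the trade-off: this flag is a blunt instrument — it defeats caching for every frame, even ones where nothing moved.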

2. Read up on GuidMixInPtr(). You can use it during pre-render to let AE know whether or not it can use the cached output for the current frame.
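Option 2 requires converting the effect to SmartFX (set PF_OutFlag2_SUPPORTS_SMART_RENDER and handle PF_Cmd_SMART_PRE_RENDER / PF_Cmd_SMART_RENDER), plus PF_OutFlag2_I_MIX_GUID_DEPENDENCIES so AE honors the mixed-in data. A rough sketch of the pre-render side — `TransformData` and `GetCurrentTransform` are hypothetical stand-ins for however you gather the per-frame transform state:

```cpp
// Sketch only: mixing per-frame transform data into the render GUID makes
// AE's cache key depend on it, so a cached frame is reused only when the
// transform (not just source/params) is unchanged. Requires the AE SDK.
static PF_Err
SmartPreRender(
    PF_InData*         in_data,
    PF_OutData*        out_data,
    PF_PreRenderExtra* extra)
{
    PF_Err err = PF_Err_NONE;

    PF_RenderRequest  req = extra->input->output_request;
    PF_CheckoutResult in_result;

    // Check out the input layer for this frame
    ERR(extra->cb->checkout_layer(in_data->effect_ref,
                                  MOTIONBLUR_INPUT,
                                  MOTIONBLUR_INPUT,
                                  &req,
                                  in_data->current_time,
                                  in_data->time_step,
                                  in_data->time_scale,
                                  &in_result));

    if (!err) {
        // Hypothetical: gather whatever per-frame state the blur depends on
        // (e.g. the layer-to-world transform deltas you compute elsewhere).
        TransformData xform = {};
        // err = GetCurrentTransform(in_data, &xform);   // your own code

        // Mix it into the cache GUID: same source + params but a different
        // transform now produces a different key, forcing a re-render.
        ERR(extra->cb->GuidMixInPtr(in_data->effect_ref,
                                    sizeof(xform),
                                    &xform));

        UnionLRect(&in_result.result_rect,     &extra->output->result_rect);
        UnionLRect(&in_result.max_result_rect, &extra->output->max_result_rect);
    }
    return err;
}
```

This is the preferred fix for your symptom: frames with no motion stay cached, while frames whose transform changed get invalidated and re-rendered.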