# AE Native 3D Layer Matching: Complete Technical Breakdown
## Executive Summary
The AE Native mode successfully achieves pixel-perfect alignment between After Effects' 3D layers and our plugin's system. The breakthrough came from abandoning complex matrix transformations and AE's native APIs in favor of implementing a direct, mathematically precise projection that exactly replicates AE's behavior.
## The Core Challenge
The plugin needed to:
1. Extract 3D layer transforms from After Effects compositions
2. Project these 3D positions to 2D screen space for texture mapping
3. Match AE's internal projection exactly so wireframes align with native renders
The cyan wireframe (our projection) wasn't matching the purple solid (AE's native rendering) due to:
- Misunderstanding of AE's projection parameters
- Incorrect rotation order and composition
- Complex matrix transformations introducing errors
## Failed Approaches
### 1. Using AE's Native Transformation APIs
**Plan:** Use `AEGP_WorldToComp()`, `AEGP_GetEffectCamera()`, and `AEGP_GetEffectCameraMatrix()`
**Result:** Failed because:
- `AEGP_GetEffectCamera()` doesn't work in the rendering context
- The transformation pipeline was too complex to integrate correctly
- AE's internal coordinate conversions were opaque
### 2. Matrix-Based Transformation Pipeline
**Plan:** Create view and projection matrices mimicking standard 3D graphics
**Result:** Failed because:
- Matrix multiplication order confusion
- Transpose issues between row-major and column-major conventions
- Accumulated floating-point errors
### 3. Direct Mapping for Non-Rotated Cameras
**Plan:** Special case for cameras with no rotation (orthographic-style)
**Result:** Failed because:
- AE always uses perspective projection for 3D cameras
- Created inconsistencies between rotated and non-rotated cases
## The Successful Solution
### Key Insight #1: Camera Zoom = Focal Length in Pixels
The critical breakthrough was understanding that AE's "zoom" parameter *is* the focal length, expressed directly in pixels, not a scaling factor:
```cpp
// CORRECT: AE uses zoom value directly as focal length in pixels
float focal_length = camera->zoom;
float proj_x = (view_x * focal_length) / view_z;
float proj_y = (view_y * focal_length) / view_z;
// This matches AE's formula: AOV = 2 * atan((comp.width/2) / zoom)
```
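Equivalently, the zoom required for a desired horizontal angle of view falls straight out of that formula. A quick illustration (not plugin code; the 50 mm figure is simply AE's default camera preset with a 36 mm film size):
```cpp
// Zoom for a desired horizontal AOV: zoom = (comp_width / 2) / tan(AOV / 2).
// For a 1920 px comp and AE's default 50 mm / 36 mm camera (AOV ~= 39.6 deg),
// this gives roughly 2667 px -- the same value AE reports as Zoom.
float comp_width  = 1920.0f;
float aov_radians = 39.6f * 3.14159265f / 180.0f;
float zoom        = (comp_width * 0.5f) / tanf(aov_radians * 0.5f);  // ~2666.7
```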
### Key Insight #2: Unified Projection Function
Created a single authoritative projection function (`projectToScreenUnified`) that handles all cases:
```cpp
bool NativeAEProcessor::projectToScreenUnified(
    float world_x, float world_y, float world_z,
    const CameraInfo* camera,
    float& screen_x, float& screen_y,
    bool use_debug,
    PF_InData* in_data,
    AEGP_SuiteHandler* suites)
```
This function (see the sketch after this list):
1. Transforms world coordinates to camera space
2. Applies inverse camera rotation in the correct order
3. Uses perspective projection with AE's zoom as focal length
4. Converts to screen coordinates with proper centering
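Condensing those steps, the body looks roughly like the sketch below. It is assembled from the fragments shown later in this document; `applyInverseCameraRotation` is a hypothetical helper standing in for the rotation code of Key Insight #3, and error handling plus the debug/suite parameters are omitted:
```cpp
bool NativeAEProcessor::projectToScreenUnified(
    float world_x, float world_y, float world_z,
    const CameraInfo* camera,
    float& screen_x, float& screen_y,
    bool use_debug, PF_InData* in_data, AEGP_SuiteHandler* suites)
{
    // 1. World -> camera space: translate by the camera position
    float view_x = world_x - camera->position.x;
    float view_y = world_y - camera->position.y;
    float view_z = world_z - camera->position.z;

    // 2. Inverse camera rotation (Z-Y-X convention; see Key Insight #3)
    applyInverseCameraRotation(camera, view_x, view_y, view_z);  // hypothetical helper

    // Reject points at or behind the camera
    if (view_z <= 0.1f) return false;

    // 3. Perspective projection: zoom is the focal length in pixels
    float proj_x = (view_x * camera->zoom) / view_z;
    float proj_y = (view_y * camera->zoom) / view_z;

    // 4. Screen space: centre in the comp (Y-down, origin at top-left)
    screen_x = proj_x + camera->compWidth  * 0.5f;
    screen_y = proj_y + camera->compHeight * 0.5f;
    return true;
}
```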
### Key Insight #3: Correct Rotation Order (Z-Y-X)
After Effects applies rotations in Z-Y-X order, which is the reverse of the order typically used in 3D graphics:
```cpp
// Apply inverse camera rotation (AE uses Z-Y-X order)
if (camera->rotation.x != 0) {
    float cos_x = cosf(-camera->rotation.x);
    float sin_x = sinf(-camera->rotation.x);
    float new_y = view_y * cos_x - view_z * sin_x;
    float new_z = view_y * sin_x + view_z * cos_x;
    view_y = new_y;
    view_z = new_z;
}
if (camera->rotation.y != 0) {
    float cos_y = cosf(-camera->rotation.y);
    float sin_y = sinf(-camera->rotation.y);
    float new_x = view_x * cos_y + view_z * sin_y;
    float new_z = -view_x * sin_y + view_z * cos_y;
    view_x = new_x;
    view_z = new_z;
}
if (camera->rotation.z != 0) {
    float cos_z = cosf(-camera->rotation.z);
    float sin_z = sinf(-camera->rotation.z);
    float new_x = view_x * cos_z - view_y * sin_z;
    float new_y = view_x * sin_z + view_y * cos_z;
    view_x = new_x;
    view_y = new_y;
}
```
### Key Insight #4: Direct Transformations (No Matrices)
Instead of complex matrix multiplications, use direct transformations:
```cpp
// Transform to camera space
float cam_x = world_x - camera->position.x;
float cam_y = world_y - camera->position.y;
float cam_z = world_z - camera->position.z;
// Then apply rotations and projection directly
```
This avoids:
- Matrix transpose confusion
- Multiplication order errors
- Accumulated floating-point errors
### Key Insight #5: Always Use Perspective Projection
Disabled the "direct mapping" mode that tried to handle non-rotated cameras differently:
```cpp
// DISABLE direct mapping - always use perspective for 3D cameras
// Direct mapping was causing incorrect projection!
bool is_direct_camera = false; // ALWAYS use perspective projection
```
## Layer Extraction and Processing
### NativeLayerScanner: Extracting 3D Layer Information
The scanner extracts layer properties from the composition:
```cpp
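// Note: simplified for readability. The actual AEGP_GetNewLayerStream calls are
// made through the AEGP_SuiteHandler and take the plugin's AEGP_PluginID as
// their first argument.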
// 1. Get position
AEGP_GetNewLayerStream(layer, AEGP_LayerStream_POSITION, &positionStream);
// 2. Get orientation (primary rotation)
AEGP_GetNewLayerStream(layer, AEGP_LayerStream_ORIENTATION, &orientationStream);
// 3. Get additional rotation axes
AEGP_GetNewLayerStream(layer, AEGP_LayerStream_ROTATE_X, &rotationXStream);
AEGP_GetNewLayerStream(layer, AEGP_LayerStream_ROTATE_Y, &rotationYStream);
AEGP_GetNewLayerStream(layer, AEGP_LayerStream_ROTATE_Z, &rotationZStream);
// 4. Get scale
AEGP_GetNewLayerStream(layer, AEGP_LayerStream_SCALE, &scaleStream);
```
**Important:** After Effects applies Orientation first, then Rotation. Both are in degrees and need conversion to radians.
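A minimal sketch of that bookkeeping, assuming the scanner stores both sets of angles in radians (field and variable names here are illustrative, not the plugin's actual members):
```cpp
constexpr float kDegToRad = 3.14159265358979f / 180.0f;

// Orientation is applied first when the transform is built...
layerInfo.orientation.x = orientation_deg.x * kDegToRad;
layerInfo.orientation.y = orientation_deg.y * kDegToRad;
layerInfo.orientation.z = orientation_deg.z * kDegToRad;

// ...followed by the per-axis Rotation values.
layerInfo.rotation.x = rotate_x_deg * kDegToRad;
layerInfo.rotation.y = rotate_y_deg * kDegToRad;
layerInfo.rotation.z = rotate_z_deg * kDegToRad;
```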
### QuadToAlembicConverter: Creating World-Space Meshes
The converter builds mesh vertices in world space:
```cpp
// 1. Create local vertices (centered at origin)
float halfWidth = layerWidth * 0.5f;
float halfHeight = layerHeight * 0.5f;
vertices = {
    {-halfWidth, -halfHeight, 0},  // top-left
    { halfWidth, -halfHeight, 0},  // top-right
    { halfWidth,  halfHeight, 0},  // bottom-right
    {-halfWidth,  halfHeight, 0}   // bottom-left
};
// 2. Apply scale
vertex.x *= scale.x;
vertex.y *= scale.y;
vertex.z *= scale.z;
// 3. Apply rotation (Z-Y-X order matching AE)
// 4. Apply position translation
vertex += position;
```
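Step 3 above, expanded. One consistent reading of the Z-Y-X convention is that it is the exact inverse of the camera rotation in Key Insight #3: visit the axes in the opposite order (Z, then Y, then X) with positive angles. A sketch, assuming `rot` holds the layer's combined rotation in radians:
```cpp
// Rotate about Z first...
float cos_z = cosf(rot.z), sin_z = sinf(rot.z);
float x1 = vertex.x * cos_z - vertex.y * sin_z;
float y1 = vertex.x * sin_z + vertex.y * cos_z;
float z1 = vertex.z;

// ...then about Y...
float cos_y = cosf(rot.y), sin_y = sinf(rot.y);
float x2 =  x1 * cos_y + z1 * sin_y;
float z2 = -x1 * sin_y + z1 * cos_y;
float y2 =  y1;

// ...then about X.
float cos_x = cosf(rot.x), sin_x = sinf(rot.x);
float y3 = y2 * cos_x - z2 * sin_x;
float z3 = y2 * sin_x + z2 * cos_x;
float x3 = x2;

vertex.x = x3;
vertex.y = y3;
vertex.z = z3;
```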
## The Complete Projection Pipeline
### 1. World Space
Layer vertices are correctly positioned in 3D world coordinates based on:
- Position from layer stream
- Rotation (Orientation + Rotation values)
- Scale from layer stream
- Layer dimensions (width/height for solids)
### 2. Camera Space Transformation
```cpp
// Subtract camera position
float cam_x = world_x - camera->position.x;
float cam_y = world_y - camera->position.y;
float cam_z = world_z - camera->position.z;
// Apply inverse camera rotation (Z-Y-X order)
// ... rotation code as shown above ...
```
### 3. Perspective Projection
```cpp
// Use AE's zoom value as focal length in pixels
float focal_length = camera->zoom;
// Standard perspective division
float proj_x = (view_x * focal_length) / view_z;
float proj_y = (view_y * focal_length) / view_z;
```
### 4. Screen Space Conversion
```cpp
// Convert to screen coordinates (Y-down, centered)
screen_x = proj_x + (camera->compWidth * 0.5f);
screen_y = proj_y + (camera->compHeight * 0.5f);
```
## Critical Technical Details
### Y-Down Coordinate System
After Effects uses Y-down screen coordinates:
- Origin (0,0) is top-left
- Y increases downward
- This is handled in the final screen-space conversion (see the short check below)
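A self-contained check of that claim, using made-up numbers: a point 100 px below the camera axis at the focal distance lands 100 px below the comp centre, with no sign flip anywhere:
```cpp
float focal_length = 1777.0f;                  // example zoom, in pixels
float comp_height  = 1080.0f;                  // example comp height
float view_y = 100.0f, view_z = focal_length;  // camera-space point, Y-down
float proj_y   = (view_y * focal_length) / view_z;  // = 100
float screen_y = proj_y + comp_height * 0.5f;       // = 640 (540 + 100)
```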
### Depth Testing
Vertices at or behind the camera plane are rejected before projection:
```cpp
if (view_z <= 0.1f) {
    return false; // Behind camera, don't render
}
```
### Bit Depth Independence
The solution works across all bit depths:
```cpp
// Detect bit depth from rowbytes
bool is_32bit = (rowbytes >= width * sizeof(PF_PixelFloat));
bool is_16bit = PF_WORLD_IS_DEEP(layer);
// Use normalized float colors internally
FloatColor color(r, g, b, a); // Always 0.0-1.0 range
// Convert appropriately when reading/writing
if (is_32bit) {
    pixel->red = color.r; // Direct assignment
} else if (is_16bit) {
    pixel->red = (A_u_short)(color.r * PF_MAX_CHAN16);
} else {
    pixel->red = (unsigned char)(color.r * 255.0f);
}
```
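Reading pixels back uses the inverse mapping. A minimal sketch; the pointer names (`pixelF`, `pixel16`, `pixel8`) are purely illustrative:
```cpp
FloatColor read(0, 0, 0, 0);
if (is_32bit) {
    read.r = pixelF->red;                           // already in 0.0-1.0
} else if (is_16bit) {
    read.r = pixel16->red / (float)PF_MAX_CHAN16;   // 0..32768 -> 0.0-1.0
} else {
    read.r = pixel8->red / 255.0f;                  // 0..255   -> 0.0-1.0
}
```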
### Camera Information Extraction
The camera's properties are extracted in the same way as layer properties:
```cpp
// Get camera zoom (focal length)
AEGP_GetNewLayerStream(camera_layer, AEGP_LayerStream_ZOOM, &zoomStream);
// Calculate FOV from zoom
float focal_length_mm = zoom_pixels * sensor_width_mm / comp_width_pixels;
float fov_radians = 2.0f * atan(sensor_width_mm / (2.0f * focal_length_mm));
```
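Substituting that `focal_length_mm` back into the FOV formula cancels the assumed sensor width, leaving the zoom-only relationship already quoted in Key Insight #1:
```cpp
// Equivalent form -- no sensor-size assumption required:
float fov_radians = 2.0f * atanf(comp_width_pixels / (2.0f * zoom_pixels));
```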
## Why It Works Now
### 1. No Matrix Confusion
Direct transformations avoid:
- Row-major vs column-major confusion
- Matrix multiplication order errors
- Transpose mistakes
### 2. Correct Parameter Interpretation
- Understanding zoom = focal length in pixels was crucial
- Proper degree-to-radian conversion for rotations
- Correct composition of Orientation + Rotation
### 3. Proper Rotation Order
- Matching AE's Z-Y-X rotation order exactly
- Using inverse rotations for camera transformation
- Consistent application across all transformations
### 4. Unified Code Path
- Single projection function eliminates inconsistencies
- No special cases for different camera types
- Same math for all scenarios
### 5. Always Perspective
- No orthographic special cases
- Consistent perspective projection for all 3D cameras
- Matches AE's actual behavior
## Performance Optimizations
### Hash-Based Change Detection
Added hash functions to detect when transforms change:
```cpp
uint32_t NativeAEProcessor::GetCameraHash(const CameraInfo& camera);
uint32_t NativeLayerScanner::GetSceneHash() const;
uint32_t NativeLayerScanner::GetLayerHash(const Layer3DInfo& layer);
```
This enables:
- Caching of projection results
- Avoiding redundant calculations
- Efficient dirty-flag systems
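As an illustration, `GetCameraHash` could be as simple as an FNV-1a pass over the projection-relevant fields. This is a sketch under that assumption, not the shipping implementation:
```cpp
uint32_t NativeAEProcessor::GetCameraHash(const CameraInfo& camera)
{
    // FNV-1a over the raw bytes of the fields that affect projection.
    auto mix = [](uint32_t h, const void* data, size_t len) {
        const unsigned char* p = static_cast<const unsigned char*>(data);
        for (size_t i = 0; i < len; ++i) { h ^= p[i]; h *= 16777619u; }
        return h;
    };
    uint32_t h = 2166136261u;
    h = mix(h, &camera.position,   sizeof(camera.position));
    h = mix(h, &camera.rotation,   sizeof(camera.rotation));
    h = mix(h, &camera.zoom,       sizeof(camera.zoom));
    h = mix(h, &camera.compWidth,  sizeof(camera.compWidth));
    h = mix(h, &camera.compHeight, sizeof(camera.compHeight));
    return h;
}
```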
## Implementation Timeline
### Commit History
1. **dd1a3e4**: "AE Native Basic, Modular, but not working" - Initial modular structure
2. **3d6351a**: "AE Native: Much closer" - Fixed camera space transformation
3. **6d5d294**: "AE Native Projection using rotation now, but wrong dimensions" - Added rotation support
4. **982a579**: "AR Native 3d layers and cam match up" - **BREAKTHROUGH** - Perfect alignment achieved
## Key Files and Functions
### Core Implementation Files
- `src/native/NativeAEProcessor.cpp`: Contains unified projection function
- `src/native/NativeLayerScanner.cpp`: Extracts layer transforms
- `src/native/QuadToAlembicConverter.cpp`: Converts layers to meshes
### Critical Functions
- `NativeAEProcessor::projectToScreenUnified()`: Main projection function
- `NativeAEProcessor::projectToScreenAENative()`: Simplified fallback
- `NativeLayerScanner::GetLayerInfo()`: Extract layer transforms
- `QuadToAlembicConverter::ConvertQuadToAlembic()`: Build world-space mesh
## Lessons Learned
1. **Understand the host application's model**: AE's zoom parameter and rotation order were key
2. **Avoid over-engineering**: Direct transformations beat complex matrix math
3. **Single source of truth**: One projection function prevents inconsistencies
4. **Test incrementally**: Each commit built on working foundations
5. **Debug visually**: Wireframe overlays made alignment issues obvious
## Future Improvements
1. **Use AEGP_GetLayerToWorldXform**: Could simplify transform extraction
2. **Cache projection results**: Use hash system for performance
3. **Support more layer types**: Extend beyond solids and cameras
4. **Optimize for GPU**: Move projection to shader for better performance
## Conclusion
The successful implementation proves that matching After Effects' 3D projection doesn't require access to its internal APIs or complex matrix mathematics. By understanding the fundamental parameters (zoom as focal length, Z-Y-X rotation order, Y-down coordinates) and implementing them directly, we achieved pixel-perfect alignment between our plugin's rendering and AE's native 3D layers. The key was simplification: removing special cases, using direct transformations, and creating a single, authoritative projection function that handles all scenarios consistently.