I have an approximately two-minute drone shot of a high school parking lot. The shot pans along the path of the parking lot from one end to the other, about two blocks, which means no single element is in frame for the whole shot.
I want to add text that appears fixed to the ground, so it slides in and out of frame as the drone flies by.
The question is which tracking method to use: AE or Mocha. Mocha's planar tracker keeps dropping off as portions of the scene leave the frame. I've tried multiple trackers, but haven't had much success jumping from one to the next.
I have a similar problem with the AE camera tracker.
Thanks in advance.
You basically already answered your own question. The only way to reliably track such shots is to use an app that supports scene stitching based on calibrated data and manual marker placement, such as SynthEyes.
I would use Mocha to create multiple trackers, apply the transform data to nulls, and then just start stitching things together. In Mocha AE you can set layer in and out points, and you can do the same for your nulls. Unless there are significant perspective shifts you want to include, I would use Mocha AE. It will be faster.
The other option is to split the footage layer just after the tracked objects leave the frame, adjust the in point of the top copy so that you have enough room to start tracking the second object, then repeat as necessary. Run the camera tracker on each segment, then trim each camera you create to the length of its tracked layer and start adding your other elements. I split up camera tracking all the time, keeping each camera and its tracked section together, with an uncut copy of the original footage at the bottom. I pre-compose each tracked section with its camera, trim the comps to the layer length, then set the footage in each pre-comp as a guide layer and add my 3D layers as needed.
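The splitting step above amounts to planning overlapping segments so each camera solve shares some frames with its neighbor for a clean handoff. Here is a minimal sketch of that planning logic; `plan_segments` and its parameters are hypothetical names, not part of any AE or Mocha API:

```python
# Hypothetical sketch: plan overlapping in/out points for per-segment
# camera tracking of a long pan. Each segment overlaps the previous one
# so the trackers share features where the cameras hand off.
def plan_segments(duration, segment_len, overlap):
    """Return (in_point, out_point) pairs, in seconds, covering `duration`."""
    segments = []
    start = 0.0
    while start < duration:
        end = min(start + segment_len, duration)
        segments.append((start, end))
        if end >= duration:
            break
        start = end - overlap  # back up so segments overlap
    return segments

# A 120 s drone shot split into 30 s segments with 2 s of overlap:
print(plan_segments(120.0, 30.0, 2.0))
# → [(0.0, 30.0), (28.0, 58.0), (56.0, 86.0), (84.0, 114.0), (112.0, 120.0)]
```

The same arithmetic applies whether you trim layers by hand or script it; the point is that every split point falls inside the previous segment, never exactly at its end.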
As it turns out, it was relatively simple:
Track the camera in AE - this took about an hour.
Find a mid point in the video.
Select a set of tracking points.
Create a text layer and camera from those points.
Swap the text for a Cinema 4D file.
Make adjustments in C4D to one's liking (lighting, angle, texture, etc.).
The render took about 8 hours (the original footage is 4K).
Uploaded to YouTube: https://youtu.be/25KjnwzK_5g
Nice, I like the transparency of the letters, although it really bugs me that several intersect with ground objects. I know you've already rendered and it took 8 hours, but I'd be sorely tempted to nudge the letters away from those objects, or provide a depth cue (like shadows on the ground) to suggest the words are in mid-air.
Also, and this is major nitpicking, but because of the direction of the flight we're reading the words backwards, and they never all appear onscreen together. Maybe you could have a second set of words off in the distance so we can read the message, but then also marvel at the coolness of the words up close.
There are a couple of other "glitches" but overall...
I may revisit it, but I need to get paying work out first.
BTW, it actually took 12 hours, but four of those were because someone who shall not be named renamed the network folder the original footage was sitting in, halfway through the first attempt! ARGH!
I wanted to see if it was possible to run this over our network as opposed to direct/local drives.
It's possible but takes a s...ugar load of time 🙂
Life is a learning curve, enjoy the corners.