Participant
June 19, 2023
Question

Recording transcript-based audio with body tracking

Question: How would one record transcript-based audio with body tracking, especially with body turning? With transcript-based lip sync, the visemes are based on the view (i.e. frontal, profile, etc.) that the character is in. So if I'm recording a body turn, how would the transcript-based lip sync work, since the mouths are different for the different views?
This topic has been closed for replies.

1 reply

Community Manager
June 20, 2023

One example that might be helpful is the built-in puppet Stardust (the unicorn). It has head turns (not body turns, but the principle is the same). Each head has its own group called "Horse Mouth" containing a set of viseme-tagged mouths. This way, regardless of which head is currently active, there is a set of viseme-tagged groups (the mouths also use cycles, so each movement is animated) for the lip sync behavior to show in response to a track of visemes in the timeline. The transcript lip sync feature takes the additional input of a transcript, but otherwise works exactly the same in terms of which layers/groups it triggers in response to those visemes.
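The structure above can be sketched in code. This is purely a conceptual model, not the Character Animator API, and all names (views, "Mouth" groups, viseme tags) are illustrative. The point it shows: because every view carries its own set of viseme-tagged mouths, the same viseme track drives the correct mouth no matter which view is active.

```python
# Conceptual sketch (NOT an actual Character Animator API): each view
# (frontal, profile, etc.) carries its own mouth group whose layers are
# tagged with viseme names, mirroring how Stardust's heads each contain
# a "Horse Mouth" group. All identifiers here are hypothetical.
puppet = {
    "Frontal": {"Mouth": {"Aa": "frontal_Aa", "Ee": "frontal_Ee", "M": "frontal_M"}},
    "Left Profile": {"Mouth": {"Aa": "profile_Aa", "Ee": "profile_Ee", "M": "profile_M"}},
}

def mouth_for(active_view: str, viseme: str) -> str:
    """Return the mouth layer lip sync would show: the viseme-tagged
    layer inside whichever view is currently active."""
    return puppet[active_view]["Mouth"][viseme]

# A viseme track from the timeline plays back against the active view,
# e.g. mid body/head turn the profile mouths are the ones shown:
viseme_track = ["M", "Aa", "Ee"]
shown = [mouth_for("Left Profile", v) for v in viseme_track]
# shown == ["profile_M", "profile_Aa", "profile_Ee"]
```

The same lookup works whether the viseme track comes from audio analysis or from the transcript feature; only the source of the visemes differs, not which groups they trigger.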


The screenshots show a couple of ways to find the mouths. You can twirl open the groups in the Lip Sync behavior to see which layers/groups it found, and click on items in those lists to navigate to them in the hierarchy. You can also search for groups in the Puppet panel if you know their names. This also illustrates how helpful it is to use consistent naming across your views, so they're easier to find later!

Hopefully that helps,


Dan Tull
Adobe Character Animator Team