Hey friends .... I don't know what has changed in Ch or Ae with the CC2018 roll-outs. But I'm in a hole.
Please correct me if I'm missing something ..... but I brought the "Cat" puppet from Adobe (thought I'd use one of yours) into a new project and dropped in a 5-second .WAV voice track.
I did a "Lip-Sync", and Ch produced a layer with the visemes, all lined up and looking good.
Shouldn't I be able to tap "Play" to preview the lip-sync result in the scene? It seems I should be able to, since I can extend (or shorten) some of the mouth shapes as needed to better match the characters' faces and expressions.
And even more, perhaps, is .... how else can I verify that the puppet's mouth shapes will render out correctly?
Man .... I'm dead in the water, and about to miss some important deadlines ....
SOS ..... Please - somebody!
I just tried to do the same thing on my laptop, using a freshly installed Ch CC2018, and brought the Ch "Cat" puppet into a new project.
It responds to my face & body movements, blinks, and microphone-inputs (mouth is working fine).
When I import my voice WAV file, I drop it on the timeline, shut off the mic, and invoke "Compute Lip Sync from Scene Audio".
Ch produces the output, and everything looks fine.
MY QUESTION: Can I scrub through the timeline, or tap the "Play" control, to preview the mouth movements while I plan my blinks and expression changes before recording?
It looks like that is the case when I watch Dave's Tuts .....
HELP PLEASE!
Sorry for not replying in your other thread. I must have scanned past it earlier, and by the time I went to reply I saw this one.
Can you post a video? It seems from the description you've got the right steps (though I don't think disabling the mic is needed), but maybe seeing it would help me spot the catch that's hanging you up.
Thanks - DT
Wait a sec -- do you mean that cat with the wagging tail? I think that one uses nutcracker jaw instead of mouth shapes, so it won't work with compute lip sync. Does it work with Chloe (importable via the Start workspace)?
OK! Thanks Dan. Got it. This is working with my puppets.
So now just one more thing:
In the scene, my Lip-Sync is working great. But when I play the timeline with the mouth movements, shouldn't I also be able to move around and do a handle movement (one hand, for instance) while the mouths are following the visemes ...?
Thanks for the help!
If you disarm the Lip Sync behavior (with the red dot in the Properties panel) and have your Dragger behavior armed, you can record the arm movements and see the lip sync playing back as you do so. If it is important to line up the movements and sounds, you may need to nudge things after the recording. It may also help to use the little text control to the right of the record button, which lets you slow down the recording speed (existing recordings will play back at that speed too, making it easier to keep up and respond in time).
You can also rehearse the movements while paused, and if Lip Sync is disarmed, the mouth at that exact time will be shown. But during actual playback you can't drag things around (playback only shows what has been recorded).
Hopefully that makes sense.
DT
A simpler summary of the rules:
- When playing (or scrubbing) you see previously recorded data.
- When stopped or recording, behaviors that are armed (red circle in the Properties panel) respond live to their inputs: mouse, keyboard, microphone, camera, etc. Behaviors that are disarmed (empty circle) show their recorded state instead: a frozen point in time if stopped, or playing back if recording.
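The two rules above can be sketched as a tiny decision function (an illustrative model in Python, not Character Animator code — the function and state names are mine):

```python
# Toy model of what a behavior shows for a given transport state.
# Illustrative only; the names here are my own, not the app's.

def visible_state(transport, armed):
    """transport: "playing", "scrubbing", "stopped", or "recording".
    armed: True if the behavior's circle is red in the Properties
    panel, False if it is the empty circle (disarmed)."""
    if transport in ("playing", "scrubbing"):
        # Playback always shows previously recorded data.
        return "recorded data"
    if armed:
        # Stopped or recording: armed behaviors respond live to
        # their inputs (mouse, keyboard, microphone, camera, etc).
        return "live input response"
    # Disarmed behaviors show their recorded state instead:
    if transport == "stopped":
        return "recorded data (frozen at the playhead)"
    return "recorded data (playing back while you record)"

print(visible_state("playing", True))     # recorded data
print(visible_state("stopped", True))     # live input response
print(visible_state("recording", False))  # recorded data (playing back while you record)
```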
You may also find, if you're trying to time some motions to the talking, that it's easier to record different parts of the motion separately, then move the takes to line them up, using take blending (or, for triggers, editing the take directly) to combine them.
DT
OK ... I think I got it. But just to be clear, let me ask please .... once the lip sync is done in the scene, I can shut that control off and record all my draggables, etc., as I need, so that when I export the WAV and PNG sequence it will all be there ...?
Thanks again for getting me on track, as I can't do without this great tool now ....
Yep. You can record all the different aspects of your performance separately and then Character Animator will combine them for playback and exporting.
bobokirk wrote
Thanks again for getting me on track, as i can't do without this great tool now ....
Sure thing. Happy to help!
DT
One more question .....
When I IMPORT the Ch-produced PNG+WAV, do I import it as a "File"? It seems the BETA versions had a distinct import item for this specific kind of Ch import .....
bobokirk wrote
When I IMPORT the Ch-produced PNG+WAV, do I import it as a "File"? It seems the BETA versions had a distinct import item for this specific kind of Ch import .....
Do you mean when importing the PNG+WAV into After Effects? If so, I think this is what you want.
FAQ: Importing exported PNG Sequences and WAV files from Character Animator into After Effects
I'm not an After Effects expert, so the only way I've imported Character Animator output into Ae myself is via Dynamic Link, but hopefully that FAQ will let you do it the way you were doing it before, since that's familiar.
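As a side note, if you ever want a quick sanity check on an export folder before importing it (that the PNG frames are contiguous and the WAV is present), a small script can do it. This is just a sketch: the frame-numbering pattern and folder layout are my assumptions about a typical export, not something Character Animator guarantees.

```python
# Sanity-check a PNG-sequence + WAV export folder. Assumes frames
# are named with a trailing frame number, e.g. "frame_00012.png";
# that naming pattern is an assumption, not a guarantee.
import os
import re

def check_export_folder(folder):
    """Return (frame_count, missing_frames, has_wav) for a folder."""
    frames = []
    has_wav = False
    for name in os.listdir(folder):
        if name.lower().endswith(".wav"):
            has_wav = True
        m = re.search(r"(\d+)\.png$", name, re.IGNORECASE)
        if m:
            frames.append(int(m.group(1)))
    frames.sort()
    # Any numbers absent between the first and last frame are gaps.
    missing = []
    if frames:
        missing = sorted(set(range(frames[0], frames[-1] + 1)) - set(frames))
    return len(frames), missing, has_wav
```

Running it on the export folder and seeing an empty `missing_frames` list (and `has_wav` true) is a reasonable sign that everything made it out of the render.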
DT