Using the power of Adobe Sensei, you can create automatic animation from speech. Character Animator will generate head and eyebrow movements corresponding to a recorded voice.
Note: On macOS, 10.15 or later is required.
Try out Speech-Aware Animation by importing an audio file and adding it to a scene with your puppet, then choosing Timeline > Compute Head Movement from Speech. A Camera Input take for the puppet's Face and/or Head Turner behavior (whichever ones are currently being used on the puppet) will be created with the automatically generated head movement.
Exaggerate or minimize the computed head movement by adjusting the Face and Head Turner behavior parameters in the Properties panel.
The computed head movement can still be blended with manual Face and Head Turner recordings for more refinement.
Note: The Compute Head Movement from Speech command does not change mouth shapes, so you'll also want to use the Timeline > Compute Lip Sync Take from Scene Audio menu command to generate lip sync for your puppet.
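To give a rough sense of what "head movement from speech" means, here is a toy sketch that maps the loudness envelope of an audio file to a per-frame head-tilt angle. This is purely illustrative: Adobe Sensei's model is far more sophisticated (it analyzes speech, not just volume), and the function name and parameters here are made up for the example.

```python
import math
import struct
import wave

def head_angles_from_audio(wav_path, fps=24, max_tilt_deg=8.0):
    """Toy illustration only: derive one head-tilt angle per animation
    frame from the loudness (RMS) envelope of a mono 16-bit WAV file.
    Not how Character Animator actually computes head movement."""
    with wave.open(wav_path, "rb") as w:
        rate = w.getframerate()
        samples = struct.unpack(
            "<%dh" % w.getnframes(), w.readframes(w.getnframes())
        )
    step = max(1, rate // fps)  # audio samples per animation frame
    angles = []
    for i in range(0, len(samples), step):
        chunk = samples[i:i + step]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        loudness = rms / 32768.0                # normalize 16-bit range to 0..1
        angles.append(max_tilt_deg * loudness)  # louder speech -> bigger tilt
    return angles
```

Even this crude volume-to-tilt mapping shows why a post-processing pass over recorded audio can produce plausible motion without any camera input.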
(Use this Beta forum thread to discuss this feature and share your feedback with the Character Animator team and other Beta users. If you encounter a bug while using this feature, let us know by posting a reply here or choosing "Report a bug" from the "Provide feedback" icon in the top-right corner of the app.)
Oh my!!! I really need thisss!!!!! 😄 😄 😄
Is it possible to have the auto head movement from this new feature move like South Park-style head movement? I really need it :(
So far, I am doing it manually with keyframes. It's a lot of work for a solo animator like me :(
I'm not sure which elements of the South Park style you're going for, but here are a few thoughts:
Hey there! Thank you so much for replying.
I haven't tried exactly what you suggested, but I have tried something similar before. I played with the Face behavior and used the webcam to achieve that South Park-style head movement, but it wasn't good.
I uploaded a short clip of my animation to YouTube so you can see exactly what I mean.
First day checking out the Beta and so far it looks GREAT! I look forward to helping in the development. I am a dinosaur in computers (wrote my first program in 1978), I have both Windows and Mac platforms, and I have a little SQA background. Do you anticipate adding .puppet files to Adobe Cloud Libraries as assets in the future? I apologize for not RTFM about features yet. Congrats on the amazing product!
Not sure about any current plans. I know when you upload a .puppet file to Google Drive, it usually sees it as a .zip file and shows its contents (the .chproj file and additional folders). I think we all hope .puppet files will become more standard in the future, and I'd love to see easier ways for people to browse and access puppets.
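Since a .puppet file is evidently zip-compatible (that's why Google Drive previews its contents), you can peek inside one yourself. A minimal sketch, assuming you have a .puppet file on disk; the path and entry names here are hypothetical:

```python
import zipfile

def list_puppet_contents(puppet_path):
    """List the entries inside a .puppet file by treating it as a zip
    archive. Based on the observation that services like Google Drive
    read .puppet files as zips containing a .chproj file and folders."""
    with zipfile.ZipFile(puppet_path) as z:
        return z.namelist()

# Hypothetical usage:
# for name in list_puppet_contents("MyCharacter.puppet"):
#     print(name)
```

Handy for a quick sanity check on a puppet file without opening Character Animator, though editing the archive by hand is not something I'd recommend.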
That's awesome! I just tried it, and Pose-to-Pose doesn't seem to work. Maybe I'm doing something wrong.
Pose-to-Pose is unfortunately not in this version yet. Is that a dealbreaker for you? We'd love to hear your thoughts either way.
Can this be implemented for live streaming from audio instead of the camera?
Currently this only works as a post-processing feature, so it will only work with an existing audio file in your timeline, not live. But if you have a use case where you would want it live and not want the camera, we'd love to hear about it.
My thinking is that it'd be less for the host machine to do live. I'm assuming that animating from live audio would use less CPU than streaming from the webcam and deriving the motion from face tracking.
I use this for live webinars, and Character Animator has a big footprint, using 40-50%+ CPU on my 24-core machine, and that's at SD frame size and 15 fps while it also has to run OBS, a virtual mixer, VST inserts, Zoom, PowerPoint, and the rest.
I've got a 2020 NVIDIA card; Character Animator seems to want half of it and at least half of the CPU.
A short example: for the live stuff, he's on all the time and talks to the participants and host.
Also, live with webcam head tracking, the puppet seems really confused and can jump wildly because of my glasses and the frequent times it loses my face. I could just give him head drift, but it's way more personal with the tracked motion.
Yes, if it can be done: using audio for head movement live, with no camera or visual tracking, would lighten the load and mean less CPU, allowing higher frame rate live output without impacting the other apps running.
I found this program and put it directly to use on my web stuff (with one of your free puppets). It's a really simple and easy way to elevate a production. I understand it's way more intricate than how I'm using it, but it's quick and simple for turning a voiceover into a messenger.
That's great info and a solid use case, thank you so much!