Using the power of Adobe Sensei, you can create automatic animation from speech. Character Animator will generate head and eyebrow movements corresponding to a recorded voice.
Note: On macOS, 10.15 or later is required.
Try out Speech-Aware Animation by importing an audio file and adding it to a scene with your puppet, then choosing Timeline > Compute Head Movement from Speech. A Camera Input take for the puppet's Face and/or Head Turner behavior (whichever ones are currently being used on the puppet) will be created with the automatically generated head movement.
Exaggerate or minimize the computed head movement by adjusting the following parameters in the Properties panel:
Head Turner behavior:
The computed head movement can still be blended with manual Face and Head Turner recordings for more refinement.
Note: The Compute Head Movement from Speech command does not change mouth shapes, so you'll want to also use the Timeline > Compute Lip Sync Take from Scene Audio menu command to generate lip sync for your puppet.
(Use this Beta forum thread to discuss this feature and share your feedback with the Character Animator team and other Beta users. If you encounter a bug while using this feature, let us know by posting a reply here or choosing "Report a bug" from the "Provide feedback" icon in the top-right corner of the app.)
Oh my!!! I really need thisss!!!!! 😄 😄 😄
Is it possible to have the auto head movement from this new feature move like South Park head movement? I really need it :(
So far, I am doing it manually with keyframes. It's a lot of work for a single animator like me :(
I'm not sure which elements of the South Park style you're going for, but here are a few thoughts:
Hey there! Thank you so much for replying.
I haven't tried exactly what you suggested, but I have tried something similar before. I played with the Face behavior and used the webcam to achieve that South Park style head movement, but it wasn't good.
I uploaded a short clip of my animation on YouTube so you can see exactly what I mean.
First day checking out the Beta and so far it looks GREAT! I look forward to helping in the development. I am a Dinosaur in Computers (wrote my 1st program in 1978). I have both Windows and Mac platforms and a little SQA background. Do you anticipate adding .puppet files to Adobe Cloud Libraries as ASSETS in the future? I apologize for not RTFM about features yet. Congrats on the amazing product!
Not sure about any current plans. I know when you upload a .puppet to Google Drive, it usually sees it as a .zip file and will show its contents (chproj file and additional folders). I think we all hope that .puppet files will be made more standard in the future, and would love to see easier ways for people to browse and access puppets.
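As a quick way to see this zip-style packaging for yourself without uploading anything, here's a small sketch using Python's standard zipfile module. It builds a stand-in package first, since a real .puppet isn't available here; the file name and contents ("MyPuppet.puppet", "project.chproj") are placeholders, not taken from an actual puppet.

```python
import zipfile

# Build a stand-in .puppet package to show that it reads as a plain zip
# archive; the file and entry names below are placeholders only.
with zipfile.ZipFile("MyPuppet.puppet", "w") as zf:
    zf.writestr("project.chproj", "{}")          # stand-in project file
    zf.writestr("assets/readme.txt", "artwork")  # stand-in asset folder

# List the package contents -- the same view Google Drive shows.
with zipfile.ZipFile("MyPuppet.puppet") as zf:
    names = zf.namelist()
print(names)
```

The same idea works from a file manager: copy the .puppet, rename the copy's extension to .zip, and open it with any archive tool to browse the chproj file and folders inside.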
That's awesome! I just tried it and the Pose-to-Pose doesn't seem to work. Maybe I'm doing something wrong.
Pose to pose is unfortunately not currently in this version. Is that a dealbreaker for you? We'd love to hear your thoughts why or why not.
Yes, for me it is a dealbreaker, since I usually do my animations with pose to pose. (I feel like sometimes I am not notified when someone replies to a thread I am in 🤔)
Can this be implemented for streaming from audio and not the camera?
Currently this only works as a post-processing feature, so it will only work with an existing audio file in your timeline, not live. But if you have a use case where you would want it live and not want the camera, we'd love to hear about it.
My thinking is that it'd be less for the host machine to do live. I'm assuming that animating from live audio would use less CPU than streaming while having to access the webcam and derive the motion from tracking.
I use this for live webinars, and Character Animator has a big footprint, using 40-50+% CPU on my 24-core machine, and that's at SD frame size and 15 fps while it also has to run OBS, a virtual mixer, VST inserts, Zoom, PowerPoint, and the rest.
I've got a 2020 Nvidia GPU; it seems Character Animator wants half of it and at least half of the CPU.
A short example: for the live stuff, he's on all the time and talks to the participants and host.
Also, live with the webcam for head tracking, the puppet seems really confused and can jump wildly due to my glasses and the frequent times it loses my face. I could just give him head drift, but it's way more personal with the tracked motion.
Yes, if it can be done: lighten the load by using audio for head movement live. No camera or visual tracking means less CPU, and it allows for higher frame-rate live output without impacting other apps running.
I found this program and put it directly to use on my web stuff (with one of your free puppets). It's a really simple and easy way to elevate a production. I understand it's way more intricate than how I'm using it, but it's quick and simple for turning a voiceover into a messenger.
That's great info and a solid use case, thank you so much!
The Ch team would like to get some feedback on the Compute Head Movement from Speech feature. Have you used it? If not, why? What do you think of it? Etc. Please take the survey. You will not be asked for any personal data. The survey takes approximately 2 minutes, and at the end there is a chance to tell the Ch team anything you want. Thanks! https://www.surveymonkey.com/r/ChComputeHead
Hey, I'm new to Character Animator, and I use the lip sync and the auto head movement. I have a podcast, so I have two characters in my animation that I auto lip sync and then auto head move. The whole thing is over an hour long, and when I apply the auto head movement to each character, it's nearly 2 hours to apply (4 in total). But on each character, when I play it back, it only seems to work for about 11 minutes and then stops. Does this sound right, or am I missing something? Because if it takes that long just to apply it for 11 minutes of footage, it seems extreme. Any advice would be appreciated.
Hmm, is the facial/puppet movement stopping, or just the audio, or both?
Hey, it's just the facial and head movements that stop working. The auto head movement does its job for 11 minutes and then stops, and it took nearly two hours for one puppet just to apply this. Now maybe no one is doing it on a scene this long, but as I said, it's for a podcast show which is roughly just over an hour long.
Also, side note, if you or anyone can confirm: is the maximum duration of a scene only 60 minutes?
I tried a ~20 minute clip and it was fine for me. Yes, it currently does take a while to process; we are investigating ways to improve the performance.
I'm trying a sequence of clips to get it over an hour to see if I can repro. My tests were on Windows 10. What OS version are you on?
Also, just curious... if you use the Compute Lip Sync Take from Scene Audio command, is it also failing at around 11 minutes?
If you split the puppet track into separate sections (e.g., 15-minute sections) and process each section separately, does it go beyond 11 minutes?
Yes, our current maximum scene duration is 60 minutes.
I have the same issue with clips over 10 minutes. Head and facial movements stop around 8 minutes in, and processing takes longer than it did a month or so ago. I love the feature very much and I'd really like to see this fixed. Thanks, y'all!
Hmm, just curious if you're getting the same issue with the latest Beta build of Character Animator, which you can access from the "Beta apps" section of the Creative Cloud desktop app.
If you are still getting it with the latest Beta build, can you provide more information about your computer setup, such as OS version number and RAM? Also, if it's an audio file that you are willing to share with us to diagnose, feel free to send a direct message with a shared link via Google Drive, Dropbox, etc. Thanks.
Hi all... If you've previously tried Speech-Aware Animation (Compute Head Movement from Speech), we've made some notable speed improvements in the latest Character Animator (Beta) builds for Version 4.4 that we'd like for you to try. Let us know your thoughts on the speed improvements.
Note: On some systems, the progress dialog box might stay at or near 100% complete but not close for a while. We're aware of this issue and will address it in a future Version 4.4 Beta build.
This is happening right now: whenever I use the "Compute Head Movement from Speech" command for my puppets, it ALWAYS puts their eyebrows into the "angry" setting. Why?
Are the eyebrows independent layers (i.e., do they have a crown icon on them so that they can move freely)? I'm wondering if they're being restricted from movement, causing them to warp? I wouldn't think it'd trigger artwork swaps (e.g., an angry artwork layer).
Hi Jeff - yes, crowns are on the "eye" group and its layers too. The "angry" eyebrows appear only when the puppet is not talking. When there is speech, the effect seems normal - then it's back to angry eyebrows when he's done talking. Would it help to see what's going on? Screen video attached... thanks much - Bob