Is there really still no way to pre-record a video of someone talking and then feed that into Character Animator to do its thing, just as it would on a live camera feed? This seems an absurd omission, especially since I've seen questions asking for this pretty much since Character Animator was in beta. Honestly, why would the application care whether the source video is live or a pre-recorded clip?
I need to animate a character based on someone I have limited access to. I need to just go to where they are, record a nice, well-lit, close-up video of them with a decent mic, and then get back to my office and let Character Animator work on it. It seems nuts that this workflow apparently hasn't been considered by the devs.
I understand your point. You can control the puppet using behaviours with pre-recorded voice-overs.
Check out this article: https://helpx.adobe.com/in/adobe-character-animator/using/behavior.html#Lip_Sync
I'm sure it will help.
Let us know if you have any additional questions.
That just looks like a kludge for the audio. It's not even clear from that document whether CH actually attempts to analyse the spoken words or simply reacts to the audio amplitude. Also, it doesn't deal with video *at all*. How does this help? Again, why is this so difficult?
Unfortunately, there is no support for pre-recorded video in CH. Yes, it would be really useful - the ability to move the playhead and see the right point in the video file would help sync the puppet movements to the video (image and audio). There was a feature request for it you can vote for, but the forums changed around recently, so I'm not sure whether the voting system changed as well. As Shivangi said, you can import an *audio* file (just import it), but not video.
However, you can export a video from CH with transparency, then use Premiere Pro or After Effects to superimpose it over the original video. You can also use “dynamic linking” to include the CH scene directly in Premiere Pro / After Effects so updates in CH appear almost immediately in PP/AE. My computer is a little sluggish doing this, but it might be the closest to what you are after - you should be able to move the playhead in the video editing software and it will show that point of the CH scene superimposed over the top of the video.
Thanks, but simply having video in CH or any of the export options you mention doesn't help. I just want CH to take a video of someone talking, track their face, sync their voice, and animate the puppet character just as it would with a live video stream. I don't understand why this is apparently so difficult.
I completely agree.
Did this ever get resolved?
Oh, and re-reading this, I think I misunderstood what you are trying to do. Are you trying to use a pre-recorded video instead of a live web camera, so that face tracking and audio (lip sync) come from there? My other responses were off target then - oops! Lol!
I do recall someone using some kind of virtual camera ... ManyCam, I think it was called. I have not tried it myself, though. Just wondering if it can play a video and make it a video source for the camera input. The NDI suite of tools might have something as well. Just making suggestions in case they're of any use, and yes, it would involve some messing around. Also, I notice there is “OBS Virtualcam”, which seems to be a plugin for OBS that makes the OBS output a camera source that I assume CH can then watch.
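To sketch that OBS idea a bit more concretely - this is just a hedged outline, not something I've verified against CH itself. It assumes a recent OBS Studio (which ships a built-in Virtual Camera and a `--startvirtualcam` launch flag); the install path and the clip name are only examples:

```shell
# Hedged sketch: play a pre-recorded clip through OBS's virtual camera
# so CH can treat it as a live camera feed (Windows paths shown as examples).
#
# 1. In OBS beforehand: create a scene with your clip added as a Media Source.
# 2. Launch OBS with the virtual camera already started:
"C:\Program Files\obs-studio\bin\64bit\obs64.exe" --startvirtualcam
# 3. In Character Animator, select "OBS Virtual Camera" as the camera input
#    and press play on the media source. CH should then face-track the clip
#    as if it were a live webcam.
```

Whether CH's face tracker copes as well with a virtual camera as with a real one, I can't say - it would need some experimenting.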
Lipsync comes from spectral analysis of the sound (not volume), except for “surprised” and “smile”, which use the video stream. I don’t know the algorithm, but there are too many visemes for it to be driven by audio levels alone. There is also the nutcracker jaw, which uses video instead to work out how far down to move the jaw (but you don’t get the same mouth shapes).
Oh! I'm sorry, I didn't see it. This was already explained by alank99101739.