Hi, due to Covid-19 we have to work with pre-recorded talking heads for Characterizer. Is it possible to feed this footage into Characterizer so the software can extract the relevant frames, as shown in this video: https://youtu.be/z02AcZhxSfs?t=265 (assuming, of course, that we got the speakers to film the relevant expressions)?
Also, is it possible to incorporate more visemes to allow for a more nuanced puppet?
Many thanks.
You cannot add visemes. They're hard-coded into the software. (You can use your own triggers and do them by hand if needed. You can also use cycle layers to make some mouth positions animate slightly.)
I don't recall if you can feed videos in directly. You might be able to use a virtual webcam to read from a file instead of a camera, but I thought Characterizer asks you to repeat an expression until it gets it right, so I'm not sure how well pre-recorded footage would work. But I don't use Characterizer very often.
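If you do want to experiment with the virtual webcam route, a rough sketch (my assumption, not something I've tested with Characterizer) is to loop the clip out through something like pyvirtualcam plus OpenCV. That needs the OBS virtual camera backend on Windows/macOS or v4l2loopback on Linux, and the file name below is just a placeholder:

```python
import cv2
import pyvirtualcam

# Hypothetical path to your pre-recorded talking-head footage.
VIDEO_PATH = "talking_head.mp4"

cap = cv2.VideoCapture(VIDEO_PATH)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS) or 30  # some files report 0, fall back to 30

with pyvirtualcam.Camera(width=width, height=height, fps=fps) as cam:
    print(f"Virtual camera started: {cam.device}")
    while True:
        ok, frame = cap.read()
        if not ok:
            # Loop the clip so the "camera" never stops sending frames.
            cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
            continue
        # OpenCV reads BGR; pyvirtualcam expects RGB.
        cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        cam.sleep_until_next_frame()
```

The virtual device should then show up in the camera dropdown like any other webcam, though whether Characterizer is happy with replayed expressions instead of live ones is another question.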
The other approach is to take frames from the video by hand and build a puppet up that way (not using Characterizer), but you don't get all the nifty effects then.
Ok, thanks for the info.