Participating Frequently
August 13, 2020
Question

Calibrate or improve compute lip sync from audio

  • 1 reply
  • 1768 views

Hello,

 

Does anyone know of a way to calibrate or improve "Compute lip sync take from scene audio"? I'm working with a puppet, and after syncing the audio the visemes are very inaccurate. I have to completely redo them, which is incredibly time-consuming. The audio was recorded on a Zoom H6 and is clear, without any background noise.

 

Thanks in advance!

This topic has been closed for replies.

1 reply

keiko.yamada
Community Manager
August 31, 2020

Hi,

 

How about tweaking the Viseme Detection slider in Preferences -> Lip Sync?

Also, you can switch the Lip Sync engine version in Preferences -> Lip Sync -> Version, if an older version of CH gave better results with your audio file.

 

Hope this helps.

Participating Frequently
September 3, 2020

Thanks for getting back. I'll give those a try.

 

Also, do you know if there is a way to replace the viseme "M" with, let's say, "Uh"? I can't figure out how, if it's possible.

 

Participating Frequently
September 18, 2020

One approach is to use "Edit Original" to open the artwork file (Photoshop or Illustrator) and find the viseme layers in the artwork. To make a layer act as both "M" and "Uh", rename the layer/group to "m uh". Then find the other layer you don't want and delete it (or rename it to something like "unwanted"; just make sure the new name doesn't contain the word "m" or "uh"). Save the file and go back to Character Animator.

 

What should happen is that the renamed layer gets both the M and Uh tags (check the panel on the right that shows the tags, with the "m uh" layer selected). A layer can have multiple tags, and if you separate the tag names with spaces, I think it will add them all.
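Conceptually, the space-separated naming convention means one layer name can map to several viseme tags. Here's an illustrative sketch of that mapping in Python; this is not Character Animator's actual tagging code, and the viseme list below is an assumption based on the mouth shapes the app uses:

```python
# Illustrative sketch only: how a space-separated layer name could map to
# multiple viseme tags. NOT Character Animator's actual implementation.
# The viseme name set is assumed for the example.
VISEMES = {"neutral", "ah", "d", "ee", "f", "l", "m", "oh", "r", "s", "uh", "w-oo"}

def tags_for_layer(layer_name):
    """Split a layer name on spaces and keep only words matching known viseme names."""
    return [word for word in layer_name.lower().split() if word in VISEMES]

# A layer named "m uh" picks up both tags, so it is shown for either mouth shape.
print(tags_for_layer("m uh"))      # ['m', 'uh']
# A layer renamed to "unwanted" matches no viseme, so it gets no tags.
print(tags_for_layer("unwanted"))  # []
```

This is why renaming the discarded layer to something containing no viseme word keeps it from being picked up by the tagger.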


That worked! Thanks for the walkthrough on doing this. I'll see if this works as a permanent/long-term solution.