Participant
September 11, 2023

Lip Sync From Scene Audio & Transcript computes visemes, but does not physically show mouth movement


I have computed lip sync from scene audio and transcript with the proper format and files, and visemes are produced, but no physical mouth shapes appear at all. I am on version 23.6 on macOS.


I have tried this with stock puppets and it works well. With an imported puppet, the visemes are computed, but on playback there are no mouth movements or shapes. The lip sync is armed. I have troubleshot by disarming everything except lip sync, but this still has not worked. I have also re-imported the puppet, and that does not fix the problem either. Computing 3 takes from scene audio does work, but the lip sync results are very poor.


Any troubleshooting advice would be greatly appreciated. I am happy to send the file over if someone would like to take a look.

1 reply

Participant
September 12, 2023

I have figured out that it has to do with two mouth picker layers: happy and sad. When the lip sync is computed on its own, visemes are computed but no mouth movement is shown. When the layer pickers are also computed, mouth movement appears, but it is very inaccurate. It seems like the only layer that influences mouth movement is the layer picker for 'happy mouth', meaning that computing a lip sync from audio and transcript may not be possible with this puppet. I am curious if anyone in the community has suggestions?


Thanks