CA has come on a lot since I last had a look, so I look forward to using it more this time.
In Puppet mode you can see Behaviors, and by making adjustments here you can improve the reaction of different parts, such as the tilt of the head, the eyebrow strength, etc. Now, I've tried Mouth Strength here, but it seems to affect the size of the mouth, which is OK. What I want to know is: can you adjust the sensitivity of the mouth in relation to the mapping of the mouth in real speech? There seems to be a mismatch between my live mouth movements and the puppet's mouth response.
I just want to get a better match between my mouth movement, which is tracked well by the camera, and the puppet's actual mouth movements. Hope that makes sense.
I did try the only option in Lip Sync, Jaw Movement, but this didn't seem to make any difference.
I'm disabled, deaf, and now diagnosed with dementia. I am intending to try to use my character to talk for me online. https://toolate.blog/
Oh, wow. I love what you're doing here, turning some of these challenges into an opportunity to connect by sharing your journey and yourself with others. This is the sort of project I find very inspiring.
The Jaw Movement option in Lip Sync only has an effect if you have a handle tagged Jaw; it just applies a bit of up-and-down movement based on the chosen visemes.
We have actually talked about having a morphed mouth based on the camera tracking, which sounds a bit like what you want here, but it doesn't exist yet.
There are also some options in Preferences for tweaking how visemes are selected by the Lip Sync behavior, but it's still audio-based; they're just tuning parameters and don't fundamentally change the kind of movement it produces.
One other thing I have seen a time or two is applying a nutcracker jaw behavior to the face (with a Jaw handle for it to move) along with Lip Sync. It can help add some more natural movement to the face and mouth. (See Tull the cat for an example.)
Another route people sometimes take is to have the tagged "mouth" layers also include the jawline, so there's more natural face movement. That approach requires the Lip Sync behavior's viseme selection to be pretty good to get good results, though you can also manually edit the results for a better match — that's quite a bit more labor intensive. (The Wizard goes even further with this to good effect; note that it replaces the cheeks and forehead as well, so the whole face looks more animated than it would with simple swaps of just the mouth parts.)
Anyway, just some more ideas that might be helpful.