Hi there!
I'm really enjoying a lot of the new features introduced in 2.1 but, for me, the live lip sync is progressively getting worse with each update. Is there a specific file I can copy over from a previous version of Character Animator that will return the lip sync to how it was in a previous build, without losing the new features introduced in 2.1?
Thanks
There is a feature request area at the top of the forums.
DanTull​
Do you have an example audio clip that seems to have worse results in 2.1 than 2.0 (or earlier) for reference?
DT
I'm afraid I don't, since I use Character Animator primarily to broadcast live on Twitch and have reverted to version 2.0. Certain visemes were hanging a little too long, or not appearing when they ought to, and it felt as though any slight noise was producing mouth movements. That isn't ideal with my setup, as it was reacting to the sounds of my gamepad when it previously wasn't (although I imagine I can improve that on my end by reconfiguring my mic setup!)
The extension that has been built for Twitch in particular looks fantastic, and it's always exciting to see how the team is improving things. I don't know if any kind of configuration tool is forthcoming to further customise the lip sync feature to each user's setup, but something of that sort would be very much welcome on my end.
Alternatively, if anyone knows how to bring version 2.0's lip sync behaviour over to 2.1, I would love to hear your advice.
So, I did try an experiment: transplanting just the lip sync data resources (files of numbers that partly define the output of the algorithm) from 2.0 to 2.1, but the results it produces differ from both 2.0 and 2.1, so that's not enough.
If you're curious you can try the same transplant (if you look in the app resources, they're all together in a folder) to see if it is better (the samples I was looking at were in a language I don't know, so I couldn't make much of an evaluation).
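If you do try the transplant yourself, it's worth backing up the 2.1 files before overwriting them so the swap is reversible. A minimal sketch of that copy-with-backup step is below; note that the actual location of the resource folder inside the Character Animator install is not specified in this thread, so the `src_dir`/`dst_dir` arguments are placeholders you would point at the folder you find in your own app resources.

```python
# Hedged sketch: copy lip-sync data resource files from a 2.0 install
# into 2.1, backing up the 2.1 originals first. Paths are placeholders,
# not the real Character Animator resource locations.
import shutil
from pathlib import Path

def transplant_resources(src_dir: str, dst_dir: str, backup_dir: str) -> list:
    """Back up every file in dst_dir to backup_dir, then overwrite
    dst_dir with the files from src_dir. Returns the copied filenames."""
    src, dst, backup = Path(src_dir), Path(dst_dir), Path(backup_dir)
    backup.mkdir(parents=True, exist_ok=True)
    for f in dst.iterdir():              # preserve the 2.1 originals
        if f.is_file():
            shutil.copy2(f, backup / f.name)
    copied = []
    for f in src.iterdir():              # bring in the 2.0 files
        if f.is_file():
            shutil.copy2(f, dst / f.name)
            copied.append(f.name)
    return copied
```

To undo the experiment, copy the files from the backup folder back over the destination (or just reinstall 2.1).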
I don't know off hand if you'll find the behavior better or worse than 2.0, but figured I'd at least share the results.
DT
Thanks for looking into it, I appreciate it! I'll give that a go and see what the results are like.
How can I send you audio samples? dtull-adobe​
I usually just put them in a file service (Dropbox, Google Drive, OneDrive, Adobe CC Files, etc.) and then send a download link via a private message.
Is the stream still available online? Might be able to strip the audio from that...
I was wondering if issues like 2.1 being a bit slower could be causing lag.
From what you say, I'm guessing the issue isn't the audio levels being different. But if upgrading from v2.0 to v2.1 made it worse, and going back to v2.0 helped, that *probably* makes the audio setup a less likely culprit... (a guess)