I used this video to set up my character layers in Adobe Illustrator: Getting Started in Adobe Character Animator CC - YouTube. The mouth and head respond to movements, but the eyebrows and pupils don't move at all. Everything is tagged correctly as far as I can tell, and I've tried troubleshooting, but I can't figure out why they are unresponsive. I'll keep troubleshooting, but if someone could help, that would be great. I would also like the eye gaze to work with a mouse, but so far nothing is responding to either webcam or mouse input.
Here is the character: Dropbox - Character - Joshua.ai
I'm still at work, but I wrote this blog post about the most common eye problems I have seen: https://extra-ordinary.tv/2018/04/21/debugging-character-animator-eyess/ - it might have some useful hints to help diagnose the problem.
The Eye Gaze behavior appears to be acting on eyes/brows in a view that was not visible by default. What tipped me off was that the Views/Handles section of the Eye Gaze behavior's properties panel in Rig mode referred to the Joshua top-level group, but the Joshua Face Right copy was the one that was visible.
I'm going to add a link to this post to an internal board where we keep track of rigging challenges that we'd like to either make easier to avoid or easier to diagnose.
Additional note: I'm not sure of your eventual plans, but one way to fix this is to tag the Left/Right/Frontal views; you'll then need to decide how you want those views to be triggered (probably either via the Head Turner behavior or an explicit swap set with triggers for each view).
When the views are tagged, the Views/Handles section in the behavior properties panel will show the matches in each view for the different tagged parts.
Thank you. Simply tagging the view was all that I needed to do, but that never would have occurred to me. This was very helpful.
Glad to help, thanks for the example case to help us figure out how to make this stuff more automatic and/or easier to figure out in the future!
Hi Dan, some extra thoughts on rigging challenges in case useful.
I think one of the challenges with rigging is the unpredictability of what gets bound to what. Some behaviors pick one set of layers and control only that set, so if they latch onto the wrong thing, it's confusing. Combined with Head Turner profiles and walk profiles (which share the same profile tags), this can get quite confusing and can end up requiring a separate behavior per profile, even though the settings are the same.
So I would suggest: have every behavior control all the layers that match, never reuse the same tag for two different meanings (as walk and head profiles do today), and get behaviors to play together better (e.g. I believe the Mouth behavior turns off the parallax effect on the mouth, etc.).
The goal should be that 99% of puppets require no manual rigging for "common" things (Head Turner, walk behaviors, draggers, etc.). I would love to draw an invisible line in the artwork for sticks, an invisible dot for draggers/handles, and so on. If you have a multi-profile, multi-head-turn puppet, having to redo work per profile each time you re-rig makes it very easy to make a mistake, ending up with strange results that are hard to diagnose without a copy of the puppet. I would rather have as many behaviors as possible at the root of the puppet, so that when significant artwork restructuring occurs (e.g. introducing a new layer over existing layers) I don't lose all the settings, such as gravity, that I had on hair dangles.
If you wanted to get really funky, you could let me name a layer "dangle @Physics(gravity=20%)" or something like that, so I can encode behaviors directly into layer names.
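To make the idea concrete, here is a rough sketch of how such layer-name annotations could be parsed. The `@Behavior(param=value, ...)` syntax is purely hypothetical (taken from the example above), not anything Character Animator supports today:

```python
import re

# Hypothetical annotation syntax from the suggestion above:
# a layer named "dangle @Physics(gravity=20%)" carries a behavior
# name plus keyword parameters, embedded in the layer name itself.
ANNOTATION = re.compile(r"@(?P<behavior>\w+)\((?P<params>[^)]*)\)")

def parse_layer_name(name):
    """Split a layer name into its base name and any @Behavior(...) annotations."""
    behaviors = {}
    for match in ANNOTATION.finditer(name):
        params = {}
        for pair in match.group("params").split(","):
            if "=" in pair:
                key, value = pair.split("=", 1)
                params[key.strip()] = value.strip()
        behaviors[match.group("behavior")] = params
    # The base name is whatever remains after stripping the annotations.
    base = ANNOTATION.sub("", name).strip()
    return base, behaviors

base, behaviors = parse_layer_name("dangle @Physics(gravity=20%)")
print(base)       # dangle
print(behaviors)  # {'Physics': {'gravity': '20%'}}
```

A rigging engine could then attach the named behavior to the layer with those parameters, so the settings survive artwork restructuring because they travel with the layer name rather than with the rig.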