The problem I'm having right now with creating different emotion sets (sad, confused, happy, etc.) is that, while it's something very basic and important, it's time-consuming and complicated to manage during a live performance.
My idea is this: what if we could connect the "smile" behavior that the camera tracks from the face to the other elements created for a smile set (eyebrows, eyes, cheeks, etc.), so they're all triggered automatically?
If you make a sad/smiling/surprised face, the camera detects it and automatically triggers the matching eyes, eyebrows, maybe even body poses!
The basic emotions that are so important for a performance would be driven automatically by the camera, without needing to build complex sets of triggers, key binds, etc.
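Roughly something like this, just a sketch to show the idea (none of these names come from the actual app, they're all made up for illustration):

```python
# Hypothetical sketch: one camera-detected expression fires a whole
# "emotion set" of avatar elements at once. All names are placeholders.

EMOTION_SETS = {
    "smile":     ["smile_mouth", "happy_eyes", "raised_eyebrows", "blush_cheeks"],
    "sad":       ["sad_mouth", "sad_eyes", "drooped_eyebrows", "slumped_pose"],
    "surprised": ["open_mouth", "wide_eyes", "high_eyebrows"],
}

def trigger_element(name: str) -> None:
    # Stand-in for the app's real trigger mechanism: here we just print.
    print(f"triggering {name}")

def on_expression_detected(expression: str) -> None:
    # Imagined callback from the face tracker: look up the emotion set
    # and trigger every element in it together.
    for element in EMOTION_SETS.get(expression, []):
        trigger_element(element)

# Example: the camera sees a smile, and the whole smile set fires at once.
on_expression_detected("smile")
```

The point is that the performer only sets up the mapping once; during the show, the face does the rest.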
Just a thought 🙂