It would be useful if Firefly video could understand how to visualise a character talking, beyond random mouth movements.
I have tried something like:
<character> saying "hello"
Firefly kind of understands this with a human character: it shows the character waving a hand, but with no mouth movement. With an animal character, there is no indication of it greeting the viewer at all.
When I use "talking" instead of "saying <something>", I get mixed results: sometimes the mouth moves, but mostly there is nothing to indicate speech.