Open for Voting
Kling video generation model should include an option to upload a reference video to control motion.
As a video editor/creative director, I am requesting the ability to upload a reference video (a driving video) to control the motion of the AI-generated output in the Kling model.
- The Goal: I want to upload a static character image (or use image-to-video mode) and a separate video file that dictates the motion (e.g., a person walking, dancing, or gesturing).
- The Feature Needed: A dedicated "Motion Reference" upload slot that extracts body dynamics, hand gestures, and facial expressions from the reference video and applies them to the generated character, similar to a "motion transfer" or simplified motion-capture workflow (see the extraction sketch after this list).
- Why this matters: Currently, text-to-video prompting is too imprecise for complex choreography or specific acting performances. A motion reference option would enable:
- Precise, believable physical movement.
- Consistent performance across multiple clips.
- A significant reduction in prompt editing time.
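To make "motion transfer" concrete: the extraction step described above amounts to pulling per-frame body keypoints from the driving video and retargeting them onto the generated character. The sketch below illustrates only the extraction half of that idea; the library choice (MediaPipe Pose) and the file name are illustrative assumptions on my part, not a description of how Kling works internally.

```python
import cv2
import mediapipe as mp

# Conceptual sketch of the extraction step the request describes:
# read a driving video and collect per-frame body keypoints.
# MediaPipe Pose here is an illustrative stand-in, not Kling internals.
mp_pose = mp.solutions.pose

def extract_motion(video_path: str):
    """Return a list of per-frame pose landmarks from a reference video."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                frames.append(result.pose_landmarks)
    cap.release()
    return frames

motion = extract_motion("walking_ref.mp4")  # the driving/reference video
print(f"Extracted pose data for {len(motion)} frames")
```

A full version of this feature would also need hand and face landmarks to cover the gestures and expressions mentioned above, but the principle is the same.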
Workflow Example:
- Upload Character Image A.
- Upload Reference Video B (a person walking).
- The model generates Character A walking with exactly the motion from Video B.
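For reference, here is a minimal sketch of what such a request could look like from an API consumer's side. Everything in it is hypothetical: Kling does not currently expose a motion-reference parameter, and the endpoint, field names (`character_image`, `motion_reference`), and response handling are assumptions made only to illustrate the requested workflow.

```python
import requests

# Hypothetical sketch only: the endpoint and field names below are
# assumptions illustrating the requested feature, not the real Kling API.
API_URL = "https://api.example.com/v1/kling/image-to-video"  # placeholder
API_KEY = "YOUR_API_KEY"

with open("character_a.png", "rb") as img, open("walking_ref.mp4", "rb") as vid:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        data={"prompt": "Character A walking, following the reference motion"},
        files={
            "character_image": img,    # Character Image A (appearance)
            "motion_reference": vid,   # Reference Video B (driving motion)
        },
        timeout=300,
    )

response.raise_for_status()
print(response.json())  # e.g., a job ID to poll for the finished clip
```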
