ActAvatar: Temporally-Aware Precise Action Control for Talking Avatars
By: Ziqiao Peng, Yi Chen, Yifeng Ma, and more
Despite significant advances in talking avatar generation, existing methods face critical challenges: insufficient text-following capability for diverse actions, lack of temporal alignment between actions and audio content, and dependency on additional control signals such as pose skeletons. We present ActAvatar, a framework that achieves phase-level precision in action control through textual guidance by capturing both action semantics and temporal context. Our approach introduces three core innovations: (1) Phase-Aware Cross-Attention (PACA), which decomposes prompts into a global base block and temporally-anchored phase blocks, enabling the model to concentrate on phase-relevant tokens for precise temporal-semantic alignment; (2) Progressive Audio-Visual Alignment, which aligns modality influence with the hierarchical feature learning process: early layers prioritize text to establish action structure, while deeper layers emphasize audio to refine lip movements, preventing modality interference; (3) a two-stage training strategy that first establishes robust audio-visual correspondence on diverse data, then injects action control through fine-tuning on structured annotations, preserving both audio-visual alignment and the model's text-following capabilities. Extensive experiments demonstrate that ActAvatar significantly outperforms state-of-the-art methods in both action control and visual quality.
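To make the PACA idea concrete, below is a minimal PyTorch sketch of how a phase-aware cross-attention layer could gate text tokens by time: every video frame attends to the global base tokens, plus only the tokens of whichever phase block covers that frame. The class names, the frame-interval anchoring, and the boolean masking scheme are all illustrative assumptions based on the abstract, not the authors' implementation.

```python
# Minimal sketch of phase-aware cross-attention (PACA-style).
# Assumptions: prompts are pre-split into a base block plus phase blocks,
# and each phase block carries a frame interval it is anchored to.

from dataclasses import dataclass

import torch
from torch import nn


@dataclass
class PhaseBlock:
    tokens: torch.Tensor  # (num_tokens, d_text) text embeddings for this phase
    start_frame: int      # first video frame this phase describes
    end_frame: int        # last video frame this phase describes (inclusive)


class PhaseAwareCrossAttention(nn.Module):
    """Cross-attention where each frame sees the global base tokens plus
    only the phase tokens whose interval covers that frame."""

    def __init__(self, d_model: int, d_text: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            d_model, n_heads, kdim=d_text, vdim=d_text, batch_first=True
        )

    def forward(self, frames, base_tokens, phase_blocks):
        # frames: (num_frames, d_model); text sequence = base + all phase tokens
        text = torch.cat([base_tokens] + [p.tokens for p in phase_blocks], dim=0)
        num_frames, num_tokens = frames.shape[0], text.shape[0]

        # Boolean attention mask: True = blocked. Start fully blocked, then
        # unblock base tokens everywhere and each phase only on its interval.
        mask = torch.ones(num_frames, num_tokens, dtype=torch.bool)
        mask[:, : base_tokens.shape[0]] = False
        offset = base_tokens.shape[0]
        for p in phase_blocks:
            n = p.tokens.shape[0]
            mask[p.start_frame : p.end_frame + 1, offset : offset + n] = False
            offset += n

        out, _ = self.attn(
            frames.unsqueeze(0), text.unsqueeze(0), text.unsqueeze(0),
            attn_mask=mask,
        )
        return out.squeeze(0)


if __name__ == "__main__":
    # Toy usage: 16 frames, a 4-token base prompt, and two hypothetical
    # phases, e.g. "wave" on frames 0-7 and "nod" on frames 8-15.
    paca = PhaseAwareCrossAttention(d_model=64, d_text=32, n_heads=4)
    frames = torch.randn(16, 64)
    base = torch.randn(4, 32)
    phases = [
        PhaseBlock(torch.randn(3, 32), start_frame=0, end_frame=7),
        PhaseBlock(torch.randn(3, 32), start_frame=8, end_frame=15),
    ]
    print(paca(frames, base, phases).shape)  # torch.Size([16, 64])
```

The mask is one plausible way to realize "concentrating on phase-relevant tokens": outside its anchored interval a phase's tokens are simply invisible to the frame, while the always-visible base block supplies the global action semantics.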
Similar Papers
AVATAAR: Agentic Video Answering via Temporal Adaptive Alignment and Reasoning
CV and Pattern Recognition
Helps computers understand long videos better.
KlingAvatar 2.0 Technical Report
CV and Pattern Recognition
Makes long, clear videos from your words.
AVATAR: Reinforcement Learning to See, Hear, and Reason Over Video
CV and Pattern Recognition
Helps robots understand videos by watching and listening.