InfiniteTalk: Audio-driven Video Generation for Sparse-Frame Video Dubbing
By: Shaoshu Yang, Zhe Kong, Feng Gao, and more
Potential Business Impact:
Makes dubbed videos match new speech from head to toe, not just at the mouth.
Recent breakthroughs in video AIGC have ushered in a transformative era for audio-driven human animation. However, conventional video dubbing techniques remain constrained to mouth region editing, resulting in discordant facial expressions and body gestures that compromise viewer immersion. To overcome this limitation, we introduce sparse-frame video dubbing, a novel paradigm that strategically preserves reference keyframes to maintain identity, iconic gestures, and camera trajectories while enabling holistic, audio-synchronized full-body motion editing. Through critical analysis, we identify why naive image-to-video models fail in this task, particularly their inability to achieve adaptive conditioning. Addressing this, we propose InfiniteTalk, a streaming audio-driven generator designed for infinite-length, long-sequence dubbing. This architecture leverages temporal context frames for seamless inter-chunk transitions and incorporates a simple yet effective sampling strategy that optimizes control strength via fine-grained reference frame positioning. Comprehensive evaluations on HDTF, CelebV-HQ, and EMTD datasets demonstrate state-of-the-art performance. Quantitative metrics confirm superior visual realism, emotional coherence, and full-body motion synchronization.
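To make the streaming design concrete, below is a minimal sketch of chunked generation with temporal context frames and sparse reference keyframes, as described in the abstract. All names and parameters (stream_dub, generate_chunk, chunk_len, context_len) are hypothetical illustrations, not the authors' implementation or released API.

```python
# Hypothetical sketch: streaming, chunk-by-chunk audio-driven generation.
# Each chunk is conditioned on (i) its audio slice, (ii) the trailing frames
# of the previous chunk (temporal context) for smooth transitions, and
# (iii) any sparse reference keyframes that fall inside the chunk, which
# anchor identity, iconic gestures, and camera trajectory.

from typing import Callable, Dict, List, Optional

import numpy as np


def stream_dub(
    audio_features: np.ndarray,              # (T, D) per-frame audio features
    reference_keyframes: Dict[int, np.ndarray],  # frame index -> (H, W, 3) image
    generate_chunk: Callable[..., np.ndarray],   # placeholder for the video generator
    chunk_len: int = 81,                     # frames produced per chunk (assumed value)
    context_len: int = 25,                   # trailing frames reused as context (assumed value)
) -> np.ndarray:
    """Generate an arbitrarily long dubbed video chunk by chunk."""
    total_frames = audio_features.shape[0]
    video: List[np.ndarray] = []
    context: Optional[np.ndarray] = None     # last frames of the previous chunk

    start = 0
    while start < total_frames:
        end = min(start + chunk_len, total_frames)

        # Reference keyframes inside this chunk, re-indexed relative to the
        # chunk so the generator can place them at fine-grained positions.
        refs = {
            idx - start: img
            for idx, img in reference_keyframes.items()
            if start <= idx < end
        }

        chunk = generate_chunk(
            audio=audio_features[start:end],
            context_frames=context,
            reference_frames=refs,
        )  # expected shape: (end - start, H, W, 3)

        video.append(chunk)
        context = chunk[-context_len:]        # carry context into the next chunk
        start = end

    return np.concatenate(video, axis=0)
```

Under this sketch, placing reference keyframes more or less densely controls how strongly the output is pinned to the source video, which is the knob the sampling strategy in the abstract tunes.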
Similar Papers
StableAvatar: Infinite-Length Audio-Driven Avatar Video Generation
CV and Pattern Recognition
Generates realistic, audio-driven talking avatar videos of unlimited length.
InfinityHuman: Towards Long-Term Audio-Driven Human
CV and Pattern Recognition
Generates long videos of realistic talking people driven by audio.
IMTalker: Efficient Audio-driven Talking Face Generation with Implicit Motion Transfer
CV and Pattern Recognition
Efficiently animates a face image to talk in sync with audio using implicit motion transfer.