InfinityHuman: Towards Long-Term Audio-Driven Human Animation
By: Xiaodi Li, Pan Xie, Yi Ren, and more
Potential Business Impact:
Makes talking people in videos look real.
Audio-driven human animation has attracted wide attention thanks to its practical applications. However, critical challenges remain in generating high-resolution, long-duration videos with consistent appearance and natural hand motions. Existing methods extend videos using overlapping motion frames but suffer from error accumulation, leading to identity drift, color shifts, and scene instability. Additionally, hand movements are poorly modeled, resulting in noticeable distortions and misalignment with the audio. In this work, we propose InfinityHuman, a coarse-to-fine framework that first generates audio-synchronized representations, then progressively refines them into high-resolution, long-duration videos using a pose-guided refiner. Since pose sequences are decoupled from appearance and resist temporal degradation, our pose-guided refiner employs stable poses and the initial frame as a visual anchor to reduce drift and improve lip synchronization. Moreover, to enhance semantic accuracy and gesture realism, we introduce a hand-specific reward mechanism trained with high-quality hand motion data. Experiments on the EMTD and HDTF datasets show that InfinityHuman achieves state-of-the-art performance in video quality, identity preservation, hand accuracy, and lip-sync. Ablation studies further confirm the effectiveness of each module. Code will be made public.
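To make the coarse-to-fine pipeline concrete, here is a minimal Python sketch of the inference flow the abstract describes: a first stage produces a low-resolution, audio-synchronized video, and a second stage refines it chunk by chunk, conditioning on the stable pose track and on the initial frame as a fixed visual anchor. All names (`coarse_generate`, `refine_chunk`, `infinity_human`, `Frame`, `chunk_size`) are hypothetical placeholders for illustration, not the authors' actual API.

```python
# Hypothetical sketch of the coarse-to-fine pipeline from the abstract.
# Module and function names are illustrative placeholders, not the paper's code.
from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    pixels: list  # stand-in for an image tensor
    pose: list    # stand-in for a pose-keypoint set (appearance-free)


def coarse_generate(audio: List[float], ref_frame: Frame, n_frames: int) -> List[Frame]:
    """Stage 1: generate a low-resolution video whose motion (including the
    pose track) is already synchronized with the input audio."""
    return [Frame(pixels=ref_frame.pixels, pose=[t]) for t in range(n_frames)]


def refine_chunk(chunk: List[Frame], anchor: Frame) -> List[Frame]:
    """Stage 2: upsample one chunk to high resolution, conditioned on
    (a) the chunk's pose sequence, which is decoupled from appearance and
        resists temporal degradation, and
    (b) the initial frame as a fixed visual anchor, so identity and color
        do not drift as chunks accumulate."""
    return [Frame(pixels=anchor.pixels, pose=f.pose) for f in chunk]


def infinity_human(audio: List[float], ref_frame: Frame,
                   n_frames: int, chunk_size: int = 16) -> List[Frame]:
    coarse = coarse_generate(audio, ref_frame, n_frames)
    anchor = ref_frame  # the same anchor is reused for every chunk
    video: List[Frame] = []
    for start in range(0, n_frames, chunk_size):
        video += refine_chunk(coarse[start:start + chunk_size], anchor)
    return video


if __name__ == "__main__":
    ref = Frame(pixels=[0.0], pose=[0])
    out = infinity_human(audio=[0.0] * 100, ref_frame=ref, n_frames=100)
    print(len(out), "refined frames")
```

The key design choice this sketch illustrates is that each chunk is anchored to the same initial frame rather than to the previous chunk's output, which is how the paper claims to avoid the error accumulation that plagues overlapping-frame extension methods.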
Similar Papers
InfiniHuman: Infinite 3D Human Creation with Precise Control
CV and Pattern Recognition
Creates endless, realistic 3D people for games.
StableAvatar: Infinite-Length Audio-Driven Avatar Video Generation
CV and Pattern Recognition
Makes talking avatar videos of any length look real.
InfiniteTalk: Audio-driven Video Generation for Sparse-Frame Video Dubbing
CV and Pattern Recognition
Makes videos match talking perfectly, head to toe.