StreamAvatar: Streaming Diffusion Models for Real-Time Interactive Human Avatars
By: Zhiyao Sun, Ziqiao Peng, Yifeng Ma, and more
Potential Business Impact:
Makes digital people move and talk live.
Real-time, streaming interactive avatars represent a critical yet challenging goal in digital human research. Although diffusion-based human avatar generation methods achieve remarkable success, their non-causal architectures and high computational costs make them unsuitable for streaming. Moreover, existing interactive approaches are typically restricted to the head-and-shoulders region, limiting their ability to produce gestures and body motions. To address these challenges, we propose a two-stage autoregressive adaptation and acceleration framework that applies autoregressive distillation and adversarial refinement to adapt a high-fidelity human video diffusion model for real-time, interactive streaming. To ensure long-term stability and consistency, we introduce three key components: a Reference Sink, a Reference-Anchored Positional Re-encoding (RAPR) strategy, and a Consistency-Aware Discriminator. Building on this framework, we develop a one-shot, interactive human avatar model capable of generating both natural talking and listening behaviors with coherent gestures. Extensive experiments demonstrate that our method achieves state-of-the-art performance, surpassing existing approaches in generation quality, real-time efficiency, and interaction naturalness. Project page: https://streamavatar.github.io.
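The abstract names a Reference Sink and Reference-Anchored Positional Re-encoding without detailing their mechanics. The sketch below illustrates one plausible reading of these ideas for a streaming autoregressive generator: tokens from the reference image are kept permanently in the attention cache while older frame tokens are evicted, and positions are re-numbered after each eviction so the reference stays anchored at the front. All function names, tensor shapes, and the eviction policy here are illustrative assumptions, not the authors' implementation.

```python
# Minimal, illustrative sketch (assumptions only) of a streaming cache with a
# persistent "reference sink" and re-anchored positions.
import torch

def update_cache(cache, new_frame_tokens, num_sink, max_cache):
    """Append new frame tokens; when over budget, keep the first `num_sink`
    reference tokens and evict the oldest non-reference frame tokens."""
    cache = torch.cat([cache, new_frame_tokens], dim=1)
    overflow = cache.shape[1] - max_cache
    if overflow > 0:
        cache = torch.cat([cache[:, :num_sink], cache[:, num_sink + overflow:]], dim=1)
    return cache

def reanchored_positions(cache_len):
    """Re-encode positions contiguously (0..cache_len-1) after eviction, so the
    reference stays at the start and indices never grow without bound."""
    return torch.arange(cache_len)

# Toy usage: 4 reference tokens as the sink, rolling cache of at most 12 tokens.
ref_tokens = torch.randn(1, 4, 64)          # tokens from the reference image
cache, num_sink, max_cache = ref_tokens, 4, 12
for step in range(10):
    frame = torch.randn(1, 2, 64)           # tokens of one newly generated frame
    cache = update_cache(cache, frame, num_sink, max_cache)
    pos = reanchored_positions(cache.shape[1])
    # ... attend over `cache` with positions `pos` to generate the next frame ...
print(cache.shape, pos[:6])                  # cache stays bounded; reference stays in front
```

In this reading, the sink prevents drift away from the subject's identity over long streams, while the positional re-encoding keeps attention patterns stable regardless of how many frames have been generated; how the paper actually realizes these components may differ.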
Similar Papers
Live Avatar: Streaming Real-time Audio-Driven Avatar Generation with Infinite Length
CV and Pattern Recognition
Makes talking avatars move instantly.
StreamingTalker: Audio-driven 3D Facial Animation with Autoregressive Diffusion Model
CV and Pattern Recognition
Makes talking faces move in real-time.