StreamAvatar: Streaming Diffusion Models for Real-Time Interactive Human Avatars

Published: December 26, 2025 | arXiv ID: 2512.22065v1

By: Zhiyao Sun, Ziqiao Peng, Yifeng Ma, and more

Potential Business Impact:

Makes digital people move and talk live.

Business Areas:
Video Streaming Content and Publishing, Media and Entertainment, Video

Real-time, streaming interactive avatars represent a critical yet challenging goal in digital human research. Although diffusion-based human avatar generation methods achieve remarkable success, their non-causal architectures and high computational costs make them unsuitable for streaming. Moreover, existing interactive approaches are typically limited to the head-and-shoulder region, restricting their ability to produce gestures and body motions. To address these challenges, we propose a two-stage autoregressive adaptation and acceleration framework that applies autoregressive distillation and adversarial refinement to adapt a high-fidelity human video diffusion model for real-time, interactive streaming. To ensure long-term stability and consistency, we introduce three key components: a Reference Sink, a Reference-Anchored Positional Re-encoding (RAPR) strategy, and a Consistency-Aware Discriminator. Building on this framework, we develop a one-shot, interactive human avatar model capable of generating both natural talking and listening behaviors with coherent gestures. Extensive experiments demonstrate that our method achieves state-of-the-art performance, surpassing existing approaches in generation quality, real-time efficiency, and interaction naturalness. Project page: https://streamavatar.github.io.
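The abstract does not detail how the Reference Sink or RAPR work, but the general idea they name — pinning reference-frame tokens in a sliding autoregressive context so the identity anchor is never evicted, and assigning positions relative to that anchor — can be sketched generically. The class, field names, and window size below are illustrative assumptions, not the paper's implementation:

```python
from collections import deque


class ReferenceSinkContext:
    """Hypothetical sketch: reference tokens are pinned at the front of a
    sliding context window, so arbitrarily long streams never drop the
    identity anchor. Positions are re-assigned relative to the reference
    (a RAPR-like re-encoding), keeping them bounded over time."""

    def __init__(self, reference_tokens, window=3):
        self.reference = list(reference_tokens)   # always retained ("sink")
        self.recent = deque(maxlen=window)        # sliding window of frames

    def append(self, frame_token):
        # Old frames are evicted automatically; the reference never is.
        self.recent.append(frame_token)

    def context(self):
        # Reference first, then the most recent frames, with fresh
        # reference-anchored positions rather than absolute stream indices.
        frames = self.reference + list(self.recent)
        return [(pos, tok) for pos, tok in enumerate(frames)]


ctx = ReferenceSinkContext(["ref0", "ref1"], window=3)
for t in range(6):
    ctx.append(f"frame{t}")
```

After six appended frames, the context still begins with the two reference tokens, followed only by the three most recent frames, and positions restart from 0 at the reference.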

Page Count
14 pages

Category
Computer Science:
CV and Pattern Recognition