JoyAvatar: Real-time and Infinite Audio-Driven Avatar Generation with Autoregressive Diffusion
By: Chaochao Li, Ruikui Wang, Liangbo Zhou, and more
Potential Business Impact:
Makes cartoon characters talk and move in real time.
Existing DiT-based audio-driven avatar generation methods have achieved considerable progress, yet their broader application is constrained by high computational overhead and the inability to synthesize long-duration videos. Autoregressive approaches address these limitations with block-wise autoregressive diffusion, but they suffer from error accumulation and quality degradation. To address this, we propose JoyAvatar, an audio-driven autoregressive model capable of real-time inference and infinite-length video generation, with the following contributions: (1) Progressive Step Bootstrapping (PSB), which allocates more denoising steps to initial frames to stabilize generation and reduce error accumulation; (2) Motion Condition Injection (MCI), which enhances temporal coherence by injecting noise-corrupted previous frames as a motion condition; and (3) Unbounded RoPE via Cache-Resetting (URCR), which enables infinite-length generation through dynamic positional encoding. Our 1.3B-parameter causal model runs at 16 FPS on a single GPU and achieves competitive results in visual quality, temporal consistency, and lip synchronization.
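The idea behind Progressive Step Bootstrapping can be illustrated with a minimal sketch: early autoregressive blocks receive more denoising steps to stabilize generation, while later blocks use fewer steps to keep inference real-time. The function name, the linear decay shape, and all parameters (`max_steps`, `min_steps`, `warmup_blocks`) are illustrative assumptions, not the paper's actual schedule.

```python
def psb_step_schedule(num_blocks, max_steps=8, min_steps=2, warmup_blocks=4):
    """Hypothetical PSB-style schedule: number of denoising steps per block.

    Early (warmup) blocks linearly decay from max_steps to min_steps;
    all remaining blocks use min_steps for real-time throughput.
    """
    schedule = []
    for b in range(num_blocks):
        if b < warmup_blocks:
            # Linearly interpolate from max_steps down to min_steps.
            frac = b / max(warmup_blocks - 1, 1)
            steps = round(max_steps - frac * (max_steps - min_steps))
        else:
            steps = min_steps
        schedule.append(steps)
    return schedule

print(psb_step_schedule(8))  # [8, 6, 4, 2, 2, 2, 2, 2]
```

Front-loading steps this way spends the extra compute where error accumulation originates, while the constant low-step tail keeps per-block latency bounded for infinite-length streaming.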
Similar Papers
Live Avatar: Streaming Real-time Audio-Driven Avatar Generation with Infinite Length
CV and Pattern Recognition
Makes talking avatars move instantly.
StableAvatar: Infinite-Length Audio-Driven Avatar Video Generation
CV and Pattern Recognition
Makes talking cartoon characters that look real.