StreamingTalker: Audio-driven 3D Facial Animation with Autoregressive Diffusion Model
By: Yifan Yang, Zhi Cen, Sida Peng, and more
Potential Business Impact:
Makes talking faces move in real time.
This paper focuses on the task of speech-driven 3D facial animation, which aims to generate realistic and synchronized facial motions driven by speech inputs. Recent methods have employed audio-conditioned diffusion models for 3D facial animation, achieving impressive results in generating expressive and natural animations. However, these methods process the whole audio sequence in a single pass, which poses two major challenges: they tend to perform poorly on audio sequences that exceed the training horizon, and they suffer from significant latency when processing long audio inputs. To address these limitations, we propose a novel autoregressive diffusion model that processes input audio in a streaming manner. This design ensures flexibility with varying audio lengths and achieves low latency independent of audio duration. Specifically, we select a limited number of past frames as historical motion context and combine them with the audio input to form a dynamic condition. This condition guides the diffusion process to iteratively generate facial motion frames, enabling real-time synthesis with high-quality results. Additionally, we implement a real-time interactive demo, highlighting the effectiveness and efficiency of our approach. We will release the code at https://zju3dv.github.io/StreamingTalker/.
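The following is a minimal Python sketch of the streaming generation loop described above: a sliding window of recently generated motion frames is combined with the incoming audio features to form the dynamic condition for iterative denoising of the next chunk. All names and values here (StreamingDenoiser, history_len, chunk_frames, the trivial denoising step) are illustrative assumptions, not the authors' actual implementation.

# Hypothetical sketch of streaming autoregressive diffusion for facial motion.
# The denoiser below is a stand-in; a real model would be a trained
# audio-conditioned diffusion network.
import numpy as np


class StreamingDenoiser:
    """Placeholder for an audio-conditioned diffusion denoiser (assumed)."""

    def __init__(self, motion_dim: int):
        self.motion_dim = motion_dim

    def denoise(self, noisy_motion, audio_feat, history, step):
        # A real model would predict clean motion (or noise) from the dynamic
        # condition (audio features + historical motion). Here we only decay
        # the noise so the loop runs end to end.
        return noisy_motion * 0.5


def generate_stream(audio_chunks, motion_dim=64, history_len=10,
                    chunk_frames=5, num_steps=4, seed=0):
    """Autoregressively generate facial motion one audio chunk at a time.

    A sliding window of the most recent `history_len` generated frames serves
    as the historical motion context conditioning the diffusion process.
    """
    rng = np.random.default_rng(seed)
    denoiser = StreamingDenoiser(motion_dim)
    history = np.zeros((history_len, motion_dim))   # seed context, e.g. a neutral face
    outputs = []
    for audio_feat in audio_chunks:                 # streaming: one chunk per iteration
        x = rng.standard_normal((chunk_frames, motion_dim))  # start from noise
        for step in reversed(range(num_steps)):     # iterative denoising
            x = denoiser.denoise(x, audio_feat, history, step)
        outputs.append(x)
        # Slide the history window forward with the newly generated frames.
        history = np.concatenate([history, x])[-history_len:]
    return np.concatenate(outputs)                  # (total_frames, motion_dim)


if __name__ == "__main__":
    # Fake per-chunk audio features; in practice these would come from an
    # audio encoder running on incoming speech.
    chunks = [np.random.randn(128) for _ in range(6)]
    motion = generate_stream(chunks)
    print(motion.shape)  # (30, 64): 6 chunks x 5 frames of facial motion

Because each chunk depends only on a fixed-size history window and its own audio features, the per-chunk latency stays constant regardless of how long the overall audio stream runs, which is the property the abstract highlights.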
Similar Papers
TalkingMachines: Real-Time Audio-Driven FaceTime-Style Video via Autoregressive Diffusion Models
Sound
Makes cartoon characters talk and move like real people.
Audio Driven Real-Time Facial Animation for Social Telepresence
Graphics
Makes virtual faces talk and move like real people.