DyStream: Streaming Dyadic Talking Heads Generation via Flow Matching-based Autoregressive Model
By: Bohong Chen, Haiyang Liu
Potential Business Impact:
Makes talking videos that feel real and instant.
Generating realistic, dyadic talking head video requires ultra-low latency. Existing chunk-based methods require full non-causal context windows, introducing significant delays. This latency prevents the immediate, non-verbal feedback required of a realistic listener. To address this, we present DyStream, a flow matching-based autoregressive model that generates video in real time from both speaker and listener audio. Our method contains two key designs: (1) we adopt a stream-friendly autoregressive framework with flow-matching heads for probabilistic modeling, and (2) we propose a causal encoder enhanced by a lookahead module that incorporates short future context (e.g., 60 ms) to improve quality while maintaining low latency. Our analysis shows this simple yet effective design significantly surpasses alternative causal strategies, including distillation and generative encoders. Extensive experiments show that DyStream generates each video frame within 34 ms, keeping end-to-end system latency under 100 ms. It also achieves state-of-the-art lip-sync quality, with offline and online LipSync Confidence scores of 8.13 and 7.61 on HDTF, respectively. The model weights and code are available.
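To make the two key designs concrete, below is a minimal sketch (not the authors' released code) of how a causal audio encoder with a bounded lookahead and a per-frame flow-matching head could fit into a streaming generation loop. All module names, dimensions, the GRU stand-in for the autoregressive backbone, and the choice of 2 lookahead frames as a proxy for ~60 ms are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: causal encoder + lookahead, flow-matching head, streaming loop.
# Everything here (sizes, kernel width, Euler steps) is an assumption for illustration.
import torch
import torch.nn as nn


class CausalLookaheadEncoder(nn.Module):
    """Causal 1D-conv encoder allowed to peek a few frames into the future."""

    def __init__(self, in_dim=80, dim=256, lookahead=2, kernel=5):
        super().__init__()
        self.lookahead = lookahead            # e.g. 2 frames as a stand-in for ~60 ms
        self.pad_left = kernel - 1 - lookahead  # rest of the receptive field is past context
        self.conv = nn.Conv1d(in_dim, dim, kernel_size=kernel)

    def forward(self, audio_feats):           # (B, T, in_dim)
        x = audio_feats.transpose(1, 2)       # (B, in_dim, T)
        x = nn.functional.pad(x, (self.pad_left, self.lookahead))
        return self.conv(x).transpose(1, 2)   # (B, T, dim)


class FlowMatchingHead(nn.Module):
    """Tiny velocity network v(x, t, cond), sampled with a few Euler steps."""

    def __init__(self, latent_dim=64, cond_dim=256, hidden=512):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, latent_dim),
        )

    def velocity(self, x, t, cond):
        return self.net(torch.cat([x, cond, t], dim=-1))

    @torch.no_grad()
    def sample(self, cond, steps=8):
        # Integrate dx/dt = v(x, t, cond) from noise (t=0) to a motion latent (t=1).
        x = torch.randn(cond.shape[0], self.latent_dim, device=cond.device)
        dt = 1.0 / steps
        for i in range(steps):
            t = torch.full((cond.shape[0], 1), i * dt, device=cond.device)
            x = x + dt * self.velocity(x, t, cond)
        return x


@torch.no_grad()
def stream_frames(encoder, backbone, head, audio_frames):
    """Frame-by-frame loop: encode audio with bounded lookahead, step a causal
    backbone (a GRUCell stands in here), sample one motion latent per frame."""
    cond_seq = encoder(audio_frames)                 # (1, T, dim)
    state = torch.zeros(1, cond_seq.shape[-1])
    outputs = []
    for t in range(cond_seq.shape[1]):
        state = backbone(cond_seq[:, t], state)      # causal recurrent step
        outputs.append(head.sample(state))           # one latent per video frame
    return torch.stack(outputs, dim=1)               # (1, T, latent_dim)


if __name__ == "__main__":
    enc = CausalLookaheadEncoder()
    backbone = nn.GRUCell(256, 256)
    head = FlowMatchingHead(latent_dim=64, cond_dim=256)
    dummy_audio = torch.randn(1, 50, 80)             # dummy audio features
    latents = stream_frames(enc, backbone, head, dummy_audio)
    print(latents.shape)                             # torch.Size([1, 50, 64])
```

The key latency point this sketch illustrates: per-frame work is a fixed-cost causal step plus a handful of Euler integration steps, so delay is bounded by the small lookahead rather than by a full non-causal chunk.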
Similar Papers
StreamingTalker: Audio-driven 3D Facial Animation with Autoregressive Diffusion Model
CV and Pattern Recognition
Makes computer faces talk in real-time.
Real-Time Streamable Generative Speech Restoration with Flow Matching
Signal Processing
Makes speech sound clearer, instantly.