Score: 1

DyStream: Streaming Dyadic Talking Heads Generation via Flow Matching-based Autoregressive Model

Published: December 30, 2025 | arXiv ID: 2512.24408v1

By: Bohong Chen, Haiyang Liu

Potential Business Impact:

Makes talking videos that feel real and instant.

Business Areas:
Video Streaming Content and Publishing, Media and Entertainment, Video

Generating realistic dyadic talking-head video requires ultra-low latency. Existing chunk-based methods require full non-causal context windows, introducing significant delays; this high latency prevents the immediate, non-verbal feedback a realistic listener requires. To address this, we present DyStream, a flow matching-based autoregressive model that generates video in real time from both speaker and listener audio. Our method rests on two key designs: (1) a stream-friendly autoregressive framework with flow-matching heads for probabilistic modeling, and (2) a causal encoder enhanced by a lookahead module that incorporates a short window of future context (e.g., 60 ms) to improve quality while maintaining low latency. Our analysis shows this simple yet effective method significantly surpasses alternative causal strategies, including distillation and generative encoders. Extensive experiments show that DyStream generates each video frame within 34 ms, keeping total system latency under 100 ms. It also achieves state-of-the-art lip-sync quality, with offline and online LipSync Confidence scores of 8.13 and 7.61 on HDTF, respectively. The model, weights, and code are available.
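
The abstract's two key designs can be sketched concretely: a causal attention mask that admits a small fixed lookahead window, and a flow-matching head that samples each frame latent by integrating a learned velocity field. The following is a minimal PyTorch sketch under stated assumptions; the module names, dimensions, Euler integrator, and hop-size arithmetic are illustrative, not the paper's implementation.

```python
# Minimal sketch of (1) a causal encoder with a short lookahead window and
# (2) a flow-matching head sampled via Euler integration. All sizes and
# hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn


def lookahead_causal_mask(seq_len: int, lookahead: int) -> torch.Tensor:
    """Boolean mask: position t may attend only to positions <= t + lookahead.

    With a hypothetical 20 ms feature hop, lookahead=3 would cover roughly
    the ~60 ms of future context the abstract mentions (assumption).
    """
    idx = torch.arange(seq_len)
    # True entries are *blocked*, following PyTorch's attn_mask convention.
    return idx[None, :] > (idx[:, None] + lookahead)


class FlowMatchingHead(nn.Module):
    """Predicts a velocity field v(x_t, t | cond); sampling integrates it."""

    def __init__(self, latent_dim: int, cond_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, x, t, cond):
        return self.net(torch.cat([x, t, cond], dim=-1))

    @torch.no_grad()
    def sample(self, cond, latent_dim: int, steps: int = 8):
        # Euler integration of dx/dt = v(x, t) from noise (t=0) to data (t=1).
        x = torch.randn(cond.shape[0], latent_dim)
        dt = 1.0 / steps
        for i in range(steps):
            t = torch.full((cond.shape[0], 1), i * dt)
            x = x + dt * self.forward(x, t, cond)
        return x


# Usage: encode audio features causally with a small lookahead, then sample
# one frame latent per time step from the flow-matching head.
seq_len, d_model, latent_dim = 16, 64, 32
audio_feats = torch.randn(1, seq_len, d_model)

encoder_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
mask = lookahead_causal_mask(seq_len, lookahead=3)
cond = encoder_layer(audio_feats, src_mask=mask)

head = FlowMatchingHead(latent_dim, d_model)
frame_latent = head.sample(cond[:, -1], latent_dim)  # latent for newest frame
print(frame_latent.shape)  # torch.Size([1, 32])
```

The masked encoder keeps latency bounded by the lookahead window rather than a full chunk, and a few Euler steps per frame keep the per-frame sampling cost low, which is the trade-off the abstract's 34 ms-per-frame figure reflects.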

Page Count
15 pages

Category
Computer Science:
Computer Vision and Pattern Recognition