STARCaster: Spatio-Temporal AutoRegressive Video Diffusion for Identity- and View-Aware Talking Portraits
By: Foivos Paraperas Papantoniou, Stathis Galanakis, Rolandos Alexandros Potamias, and more
Potential Business Impact:
Makes talking videos from a single picture and a voice recording.
This paper presents STARCaster, an identity-aware spatio-temporal video diffusion model that addresses both speech-driven portrait animation and free-viewpoint talking portrait synthesis within a unified framework, given an identity embedding or a reference image. Existing 2D speech-to-video diffusion models depend heavily on reference guidance, which limits motion diversity, while 3D-aware animation typically relies on inversion through pre-trained tri-plane generators, often resulting in imperfect reconstructions and identity drift. We rethink the reference- and geometry-based paradigms in two ways. First, we deviate from strict reference conditioning at pre-training by introducing softer identity constraints. Second, we address 3D awareness implicitly within the 2D video domain by leveraging the inherent multi-view nature of video data. STARCaster adopts a compositional approach, progressing from ID-aware motion modeling, to audio-visual synchronization via lip-reading-based supervision, and finally to novel-view animation through temporal-to-spatial adaptation. To overcome the scarcity of 4D audio-visual data, we propose a decoupled learning approach in which view consistency and temporal coherence are trained independently. A self-forcing training scheme enables the model to learn from longer temporal contexts than those generated at inference, mitigating the overly static animations common in existing autoregressive approaches. Comprehensive evaluations demonstrate that STARCaster generalizes effectively across tasks and identities, consistently surpassing prior approaches across multiple benchmarks.
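To make the self-forcing idea concrete, here is a minimal training-loop sketch, assuming a chunk-wise autoregressive video diffusion model. Every name below (`denoise_chunk`, the chunk sizes, the conditioning interface) is a hypothetical stand-in rather than the paper's actual API; the sketch only illustrates the stated idea that training rolls out more self-generated chunks than inference does, so the model learns from long self-conditioned contexts.

```python
# Hypothetical sketch of a self-forcing training rollout for a chunk-wise
# autoregressive video diffusion model. Names, shapes, and the
# `denoise_chunk` interface are illustrative, not taken from STARCaster.
import torch

def self_forcing_rollout(model, id_embed, audio_feats, chunk_len=16,
                         train_chunks=4):
    """Autoregressively generate `train_chunks` chunks, conditioning each
    chunk on the model's own previous outputs rather than ground truth.
    Using more chunks at training time than at inference exposes the
    model to longer temporal contexts, which is meant to counteract the
    overly static animations of standard autoregressive training."""
    context = []   # previously generated frame chunks (self-forced history)
    losses = []
    for i in range(train_chunks):
        # Audio features aligned with the current chunk of frames.
        audio_chunk = audio_feats[:, i * chunk_len:(i + 1) * chunk_len]
        # Condition on self-generated history, not the ground-truth video.
        ctx = torch.cat(context, dim=1) if context else None
        frames, loss = model.denoise_chunk(ctx, id_embed, audio_chunk)
        losses.append(loss)
        # Detach so gradients do not flow through the entire rollout.
        context.append(frames.detach())
    return torch.stack(losses).mean()
```

Detaching the generated context keeps memory bounded while still exposing the model to its own accumulated drift over long horizons, which is the usual motivation for self-forcing-style schemes.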
Similar Papers
StreamingTalker: Audio-driven 3D Facial Animation with Autoregressive Diffusion Model
CV and Pattern Recognition
Makes talking faces move in real-time.
InfinityStar: Unified Spacetime AutoRegressive Modeling for Visual Generation
CV and Pattern Recognition
Creates realistic videos from text, faster than before.