FlashLips: 100-FPS Mask-Free Latent Lip-Sync using Reconstruction Instead of Diffusion or GANs
By: Andreas Zinonos, Michał Stypułkowski, Antoni Bigata, and more
We present FlashLips, a two-stage, mask-free lip-sync system that decouples lip control from rendering and achieves real-time performance, running at over 100 FPS on a single GPU while matching the visual quality of larger state-of-the-art models. Stage 1 is a compact, one-step latent-space editor that reconstructs an image from a reference identity, a masked target frame, and a low-dimensional lips-pose vector, trained purely with reconstruction losses: no GANs or diffusion. To remove explicit masks at inference, we use self-supervision: we generate mouth-altered variants of the target image that serve as pseudo ground truth for fine-tuning, teaching the network to localize edits to the lips while preserving the rest of the frame. Stage 2 is an audio-to-pose transformer trained with a flow-matching objective to predict lips-pose vectors from speech. Together, these stages form a simple, stable pipeline that combines deterministic reconstruction with robust audio control, delivering high perceptual quality at faster-than-real-time speed.
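The flow-matching objective used to train the Stage 2 audio-to-pose transformer can be sketched in a few lines. This is a minimal NumPy illustration of the standard conditional flow-matching loss on a straight-line noise-to-data path, not the authors' implementation; the function name, the shapes, and the stand-in `predict_velocity` callable are all assumptions for the sketch.

```python
import numpy as np

def flow_matching_loss(predict_velocity, x0, x1, t):
    """Conditional flow-matching loss on a straight-line path from x0 to x1.

    x0 : (B, D) noise samples drawn from a standard Gaussian
    x1 : (B, D) ground-truth lips-pose vectors (the data)
    t  : (B, 1) random times in [0, 1]
    predict_velocity : callable (x_t, t) -> (B, D) predicted velocity
                       (stand-in for the audio-conditioned transformer)
    """
    x_t = (1.0 - t) * x0 + t * x1             # point on the interpolation path
    v_target = x1 - x0                        # constant velocity of that path
    v_pred = predict_velocity(x_t, t)
    return float(np.mean((v_pred - v_target) ** 2))  # MSE on velocity
```

Training would minimize this loss over random draws of `x0` and `t`; at inference, the learned velocity field is integrated from noise at t = 0 to a pose vector at t = 1, for example with a few Euler steps.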
Similar Papers
FluentLip: A Phonemes-Based Two-stage Approach for Audio-Driven Lip Synthesis with Optical Flow Consistency
CV and Pattern Recognition
Makes talking videos look and sound real.
FlashPortrait: 6x Faster Infinite Portrait Animation with Adaptive Latent Prediction
CV and Pattern Recognition
Keeps animated faces consistent while speeding up generation.
Mask-Free Audio-driven Talking Face Generation for Enhanced Visual Quality and Identity Preservation
CV and Pattern Recognition
Makes faces talk realistically from sound.