SyncAnyone: Implicit Disentanglement via Progressive Self-Correction for Lip-Syncing in the Wild
By: Xindi Zhang, Dechao Meng, Steven Xiao, and more
Potential Business Impact:
Enables realistic video dubbing: a speaker's lips can be re-synchronized to new audio in any language.
High-quality AI-powered video dubbing demands precise audio-lip synchronization, high-fidelity visual generation, and faithful preservation of identity and background. Most existing methods rely on a mask-based training strategy: the mouth region is masked in talking-head videos, and the model learns to synthesize lip movements from the corrupted inputs and target audio. While this facilitates lip-sync accuracy, it disrupts spatiotemporal context, impairing performance on dynamic facial motions and causing instability in facial structure and background consistency. To overcome this limitation, we propose SyncAnyone, a novel two-stage learning framework that achieves accurate motion modeling and high visual fidelity simultaneously. In Stage 1, we train a diffusion-based video transformer for masked mouth inpainting, leveraging its strong spatiotemporal modeling to generate accurate, audio-driven lip movements. However, because the inputs are corrupted, minor artifacts may arise in the surrounding facial regions and the background. In Stage 2, we develop a mask-free tuning pipeline to address these mask-induced artifacts. Specifically, building on the Stage 1 model, we construct a data generation pipeline that creates pseudo-paired training samples by synthesizing lip-synced videos from each source video and randomly sampled audio. We then tune the Stage 2 model on this synthetic data, achieving precise lip editing and better background consistency. Extensive experiments show that our method achieves state-of-the-art results in visual quality, temporal coherence, and identity preservation in in-the-wild lip-syncing scenarios.
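To make the Stage 2 data pipeline concrete, below is a minimal sketch of how the pseudo-paired samples described in the abstract could be assembled. All names here (the stage1_model interface, audio_pool, build_pseudo_pairs) are hypothetical illustrations, not the authors' released code: the only assumptions taken from the abstract are that a frozen Stage 1 model synthesizes lip-synced video from a source video plus randomly sampled (mismatched) audio, and that Stage 2 is then tuned mask-free to recover the original video.

```python
# Hypothetical sketch of SyncAnyone's Stage 2 pseudo-pair generation,
# based only on the abstract. Interfaces are assumptions.
import torch


def build_pseudo_pairs(stage1_model, source_videos, audio_pool):
    """Create mask-free training triples for Stage 2.

    source_videos: list of (video_tensor, true_audio_tensor) pairs.
    audio_pool:    list of audio tensors to sample mismatched audio from.
    """
    pairs = []
    for video, true_audio in source_videos:
        # Randomly sampled audio that does NOT match the video's lips.
        idx = torch.randint(len(audio_pool), (1,)).item()
        fake_audio = audio_pool[idx]

        with torch.no_grad():
            # The frozen Stage 1 model performs masked-mouth inpainting
            # conditioned on audio, yielding a lip-synced but possibly
            # artifact-prone video.
            synthesized = stage1_model(video, fake_audio)

        # Pseudo-pair: the synthesized video (wrong lips, mask-induced
        # artifacts) plus the original audio as input/condition, with the
        # pristine source video as the reconstruction target. No mouth
        # mask is applied at this stage.
        pairs.append({
            "input_video": synthesized,
            "cond_audio": true_audio,
            "target_video": video,
        })
    return pairs
```

The key property of such pairs is that the Stage 2 model never sees a mouth mask: it learns to edit only the lip region while reproducing the unedited face and background, which is what the abstract credits for the improved background consistency.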
Similar Papers
OmniSync: Towards Universal Lip Synchronization via Diffusion Transformers
CV and Pattern Recognition
Makes talking videos match the sound perfectly.
SayAnything: Audio-Driven Lip Synchronization with Conditional Video Diffusion
CV and Pattern Recognition
Makes any person's mouth move to any sound.
KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution
CV and Pattern Recognition
Makes videos match new voices perfectly.