OmniSync: Towards Universal Lip Synchronization via Diffusion Transformers
By: Ziqiao Peng, Jiwen Liu, Haoxian Zhang, and more
Potential Business Impact:
Makes talking videos match the sound perfectly.
Lip synchronization is the task of aligning a speaker's lip movements in video with the corresponding speech audio, and it is essential for creating realistic, expressive video content. However, existing methods often rely on reference frames and masked-frame inpainting, which limits their robustness under pose variation, facial occlusion, and stylized content, and makes identity consistency difficult to maintain. In addition, because audio provides a weaker conditioning signal than visual cues, lip-shape leakage from the original video can degrade lip-sync quality. In this paper, we present OmniSync, a universal lip synchronization framework for diverse visual scenarios. Our approach introduces a mask-free training paradigm that uses Diffusion Transformer models for direct frame editing without explicit masks, enabling unlimited-duration inference while maintaining natural facial dynamics and preserving character identity. During inference, we propose a flow-matching-based progressive noise initialization that ensures pose and identity consistency while allowing precise mouth-region editing. To address the weak conditioning signal of audio, we develop a Dynamic Spatiotemporal Classifier-Free Guidance (DS-CFG) mechanism that adaptively adjusts guidance strength over time and space. We also establish the AIGC-LipSync Benchmark, the first evaluation suite for lip synchronization in diverse AI-generated videos. Extensive experiments demonstrate that OmniSync significantly outperforms prior methods in visual quality and lip-sync accuracy, achieving superior results on both real-world and AI-generated videos.
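The abstract does not spell out either mechanism in detail, so the sketch below only illustrates the general ideas under stated assumptions: progressive noise initialization is modeled as a rectified-flow-style linear interpolation between the source frame and noise, and DS-CFG as a classifier-free-guidance scale that varies per pixel (a Gaussian bump around the mouth) and per denoising step (linearly decaying). All function names, parameters, and schedules here (progressive_noise_init, ds_cfg, g_max, g_min, sigma) are illustrative assumptions, not taken from the paper.

```python
import torch


def progressive_noise_init(x0: torch.Tensor, noise: torch.Tensor, s: float) -> torch.Tensor:
    """Start denoising from a partially noised source frame.

    Flow-matching-style linear interpolation: s=1.0 is pure noise,
    smaller s keeps more of the source frame, preserving coarse pose
    and identity while leaving the mouth region editable. The exact
    schedule for s is an assumption, not taken from the paper.
    """
    return (1.0 - s) * x0 + s * noise


def ds_cfg(eps_uncond: torch.Tensor, eps_cond: torch.Tensor,
           mouth_center: torch.Tensor, progress: float,
           g_max: float = 7.5, g_min: float = 1.5,
           sigma: float = 0.15) -> torch.Tensor:
    """Dynamic spatiotemporal classifier-free guidance (sketch).

    Standard CFG uses one scalar guidance scale; here the scale varies
    over space (a Gaussian bump around the mouth) and over denoising
    time (decaying as `progress` goes 0 -> 1). Both schedules are
    illustrative assumptions.
    eps_*: (B, C, H, W) noise/velocity predictions.
    mouth_center: (B, 2) normalized (y, x) mouth coordinates in [0, 1].
    """
    B, _, H, W = eps_cond.shape
    device = eps_cond.device
    # Temporal schedule: strong audio guidance early, relaxed later.
    g_t = g_max - (g_max - g_min) * progress
    # Spatial weight: peaks at the mouth, falls off with distance.
    ys = torch.linspace(0.0, 1.0, H, device=device)
    xs = torch.linspace(0.0, 1.0, W, device=device)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")          # (H, W)
    cy = mouth_center[:, 0].view(B, 1, 1)
    cx = mouth_center[:, 1].view(B, 1, 1)
    w = torch.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    # Per-pixel scale >= 1: near-plain prediction far from the mouth,
    # strongly audio-guided prediction near it.
    g = 1.0 + (g_t - 1.0) * w.unsqueeze(1)                  # (B, 1, H, W)
    return eps_uncond + g * (eps_cond - eps_uncond)
```

In a sampling loop, the guided prediction from ds_cfg would replace the usual single-scale CFG output at each step, with the loop itself starting from progressive_noise_init rather than pure noise.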
Similar Papers
SyncAnyone: Implicit Disentanglement via Progressive Self-Correction for Lip-Syncing in the wild
CV and Pattern Recognition
Makes videos speak any language perfectly.
OmniInsert: Mask-Free Video Insertion of Any Reference via Diffusion Transformer Models
CV and Pattern Recognition
Puts new people into videos perfectly.
Detecting Lip-Syncing Deepfakes: Vision Temporal Transformer for Analyzing Mouth Inconsistencies
CV and Pattern Recognition
Finds fake videos where mouths don't match sound.