Video Editing for Audio-Visual Dubbing
By: Binyamin Manela, Sharon Gannot, Ethan Fetaya
Potential Business Impact:
Makes dubbed videos look like real people talking.
Visual dubbing, the synchronization of facial movements with new speech, is crucial for making content accessible across languages and reaching a broader global audience. Current methods, however, face significant limitations: existing approaches either generate talking faces from scratch, which hinders seamless integration into the original scene, or rely on inpainting techniques that discard vital visual information such as partial occlusions and lighting variations. This work introduces EdiDub, a novel framework that reformulates visual dubbing as a content-aware editing task. EdiDub preserves the original video context and uses a specialized conditioning scheme to ensure faithful, accurate modifications rather than merely copying the input frames. On multiple benchmarks, including a challenging occluded-lip dataset, EdiDub significantly improves identity preservation and lip synchronization. Human evaluations further confirm its advantage, with higher synchronization and visual-naturalness scores than leading methods. These results demonstrate that a content-aware editing approach outperforms traditional generation or inpainting, particularly in maintaining complex visual elements while ensuring accurate lip synchronization.
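The key distinction the abstract draws is in what the model is conditioned on: inpainting masks out the mouth region and so discards occlusion and lighting cues, while content-aware editing keeps the full original frame alongside the new speech features. The sketch below is a minimal, illustrative PyTorch contrast between the two conditioning styles; the class name DubbingConditioner, the feature dimensions, and the crude lower-face mask are assumptions for illustration and are not taken from EdiDub.

```python
# Illustrative sketch (not the authors' code): contrasts inpainting-style
# conditioning, which masks the mouth region, with editing-style conditioning,
# which keeps the full original frame plus the new speech features.
from typing import Optional

import torch
import torch.nn as nn


class DubbingConditioner(nn.Module):
    """Builds a conditioning tensor for a hypothetical video denoiser."""

    def __init__(self, frame_channels: int = 3, audio_dim: int = 80, cond_dim: int = 64):
        super().__init__()
        # Project per-frame audio features (e.g. mel bins) and frame pixels
        # into a shared conditioning space.
        self.audio_proj = nn.Linear(audio_dim, cond_dim)
        self.frame_proj = nn.Conv2d(frame_channels, cond_dim, kernel_size=3, padding=1)

    def forward(self, frame: torch.Tensor, audio: torch.Tensor,
                mouth_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
        # frame: (B, 3, H, W); audio: (B, audio_dim); mouth_mask: (B, 1, H, W) or None.
        if mouth_mask is not None:
            # Inpainting-style conditioning: pixels under the mask are zeroed,
            # so occlusions and lighting inside the mouth region are lost.
            frame = frame * (1.0 - mouth_mask)
        # Editing-style conditioning (mouth_mask=None): the full original frame
        # is kept, so the model edits the lips in context instead of filling a hole.
        frame_feat = self.frame_proj(frame)                    # (B, C, H, W)
        audio_feat = self.audio_proj(audio)[:, :, None, None]  # (B, C, 1, 1)
        return frame_feat + audio_feat                         # broadcast over H, W


if __name__ == "__main__":
    cond = DubbingConditioner()
    frame = torch.rand(2, 3, 64, 64)
    audio = torch.rand(2, 80)
    mask = torch.zeros(2, 1, 64, 64)
    mask[:, :, 40:, 16:48] = 1.0                 # crude lower-face mask
    inpaint_cond = cond(frame, audio, mask)      # context under the mask discarded
    edit_cond = cond(frame, audio)               # full original context preserved
    print(inpaint_cond.shape, edit_cond.shape)   # both torch.Size([2, 64, 64, 64])
```

In this toy setup, calling the conditioner without a mask corresponds to the editing setting the abstract advocates, where the denoiser sees the untouched frame and must modify only the lip region, while the masked call mimics the inpainting setting it argues against.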
Similar Papers
From Inpainting to Editing: A Self-Bootstrapping Framework for Context-Rich Visual Dubbing
CV and Pattern Recognition
Makes videos match new spoken words perfectly.
StableDub: Taming Diffusion Prior for Generalized and Efficient Visual Dubbing
CV and Pattern Recognition
Makes talking videos match voices perfectly.
Identity-Preserving Video Dubbing Using Motion Warping
CV and Pattern Recognition
Makes dubbed videos look like the original person.