Preserving Source Video Realism: High-Fidelity Face Swapping for Cinematic Quality
By: Zekai Luo, Zongze Du, Zhouhang Zhu, and more
Video face swapping is crucial in film and entertainment production, where achieving high fidelity and temporal consistency over long, complex video sequences remains a significant challenge. Inspired by recent advances in reference-guided image editing, we explore whether the rich visual attributes of source videos can likewise be leveraged to enhance both fidelity and temporal coherence in video face swapping. Building on this idea, we present LivingSwap, the first video-reference-guided face-swapping model. Our approach employs keyframes as conditioning signals to inject the target identity, enabling flexible and controllable editing. By combining keyframe conditioning with video reference guidance, the model performs temporal stitching to maintain stable identity preservation and high-fidelity reconstruction across long video sequences. To address the scarcity of data for reference-guided training, we construct a paired face-swapping dataset, Face2Face, and reverse the data pairs to obtain reliable ground-truth supervision. Extensive experiments demonstrate that our method achieves state-of-the-art results, seamlessly integrating the target identity with the source video's expressions, lighting, and motion while significantly reducing manual effort in production workflows. Project webpage: https://aim-uofa.github.io/LivingSwap
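The abstract does not specify how keyframe conditioning and temporal stitching are realized, so the following is only a minimal sketch of one plausible reading: a long source video is processed in overlapping chunks, each chunk is edited by a model conditioned on identity keyframes, and the overlaps are linearly blended so identity and appearance stay stable across chunk borders. Everything here is hypothetical, not LivingSwap's actual implementation: `swap_chunk` is a stand-in for the generative model, and the chunk length, overlap, and blending weights are illustrative choices.

```python
# Hypothetical sketch of keyframe-conditioned, overlap-blended processing of a
# long video. The real LivingSwap model and its parameters are not described
# in the abstract; this only illustrates the temporal-stitching idea.
import numpy as np

def swap_chunk(source_frames, identity_keyframes):
    """Stand-in for the model call: edits a short chunk of source frames,
    conditioned on identity keyframes, while the source frames themselves
    supply expressions, lighting, and motion. Placeholder: returns a copy."""
    return source_frames.copy()

def stitch_long_video(source_video, identity_keyframes, chunk_len=32, overlap=8):
    """Process a long video in overlapping chunks and linearly blend the
    overlap regions so the swapped identity stays consistent over time."""
    assert 0 < overlap < chunk_len
    n = len(source_video)
    out = np.zeros_like(source_video, dtype=np.float32)
    weight = np.zeros(n, dtype=np.float32)
    step = chunk_len - overlap
    for start in range(0, n, step):
        end = min(start + chunk_len, n)
        chunk = swap_chunk(source_video[start:end], identity_keyframes)
        # Per-frame blend weights: ramp up over the leading overlap region so
        # each chunk hands off smoothly from the previous one.
        w = np.ones(end - start, dtype=np.float32)
        ramp = min(overlap, end - start)
        w[:ramp] = np.linspace(0.0, 1.0, ramp, endpoint=False) + 1e-3
        out[start:end] += chunk.astype(np.float32) * w[:, None, None, None]
        weight[start:end] += w
        if end == n:
            break
    # Normalize by accumulated weights to get the blended result.
    return (out / weight[:, None, None, None]).astype(source_video.dtype)

if __name__ == "__main__":
    video = np.random.randint(0, 255, size=(100, 64, 64, 3), dtype=np.uint8)
    keyframes = video[::25]  # hypothetical: a few frames carrying the target identity
    result = stitch_long_video(video, keyframes)
    print(result.shape)  # (100, 64, 64, 3)
```

The overlap blending is what makes the chunks read as one sequence: frames inside an overlap receive contributions from both neighboring chunks, weighted so that neither chunk's boundary produces a visible identity or lighting jump.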