A Controllable 3D Deepfake Generation Framework with Gaussian Splatting
By: Wending Liu, Siyun Liang, Huy H. Nguyen, and more
Potential Business Impact:
Makes fake videos look real from any angle.
We propose a novel 3D deepfake generation framework based on 3D Gaussian Splatting that enables realistic, identity-preserving face swapping and reenactment in a fully controllable 3D space. Compared to conventional 2D deepfake approaches, which suffer from geometric inconsistencies and limited generalization to novel views, our method combines a parametric head model with dynamic Gaussian representations to support multi-view consistent rendering, precise expression control, and seamless background integration. To address editing challenges in point-based representations, we explicitly separate the head and background Gaussians and use pre-trained 2D guidance to optimize the facial region across views. We further introduce a repair module to enhance visual consistency under extreme poses and expressions. Experiments on NeRSemble and additional evaluation videos demonstrate that our method matches state-of-the-art 2D approaches in identity preservation and in pose and expression consistency, while significantly outperforming them in multi-view rendering quality and 3D consistency. Our approach bridges the gap between 3D modeling and deepfake synthesis, enabling new directions for scene-aware, controllable, and immersive visual forgeries, and revealing the threat that the emerging 3D Gaussian Splatting technique could be used for manipulation attacks.
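The head/background separation described above can be illustrated with a minimal sketch. Note the assumptions: the abstract does not specify the partition criterion, so this example uses a simple spherical proxy around the parametric head model's center; the function name `split_gaussians` and all parameters are hypothetical, and real 3DGS scenes would partition full Gaussian attributes (covariance, opacity, color), not just centers.

```python
import numpy as np

def split_gaussians(centers, head_center, radius):
    """Partition Gaussian centers into head vs. background sets.

    Uses distance to a head-model center as a sphere proxy (an
    assumption for illustration; the paper's actual criterion is
    not specified in the abstract). Only the head set would then
    be optimized with pre-trained 2D guidance.
    """
    dists = np.linalg.norm(centers - head_center, axis=1)
    head_mask = dists <= radius
    return centers[head_mask], centers[~head_mask]

# Toy scene: three Gaussian centers, head region = unit sphere at origin.
centers = np.array([
    [0.2, 0.0, 0.1],   # inside head region
    [5.0, 0.0, 0.0],   # background
    [0.0, 0.9, 0.0],   # inside head region
])
head, background = split_gaussians(centers, np.zeros(3), radius=1.0)
print(len(head), len(background))  # 2 1
```

Keeping the background set frozen while only the head Gaussians receive gradient updates is what allows face edits without disturbing the rest of the scene.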
Similar Papers
AHA! Animating Human Avatars in Diverse Scenes with Gaussian Splatting
CV and Pattern Recognition
Makes animated people look real in 3D videos.
G4Splat: Geometry-Guided Gaussian Splatting with Generative Prior
CV and Pattern Recognition
Makes 3D pictures from few photos.
GSFix3D: Diffusion-Guided Repair of Novel Views in Gaussian Splatting
CV and Pattern Recognition
Fixes blurry 3D pictures using AI.