Complete Gaussian Splats from a Single Image with Denoising Diffusion Models
By: Ziwei Liao, Mohamed Sayed, Steven L. Waslander, and more
Potential Business Impact:
Creates full 3D scenes from one picture.
Gaussian splatting typically requires dense observations of the scene and can fail to reconstruct occluded and unobserved areas. We propose a latent diffusion model that reconstructs a complete 3D scene with Gaussian splats, including the occluded parts, from only a single image at inference time. Completing the unobserved surfaces of a scene is challenging because many plausible surfaces can explain the observation. Conventional methods use a regression-based formulation that predicts a single "mode" for occluded and out-of-frustum surfaces, leading to blurry, implausible results that cannot capture multiple possible explanations. As a result, they often address the problem only partially: they focus on objects isolated from the background, reconstruct only visible surfaces, or fail to extrapolate far from the input views. In contrast, we propose a generative formulation that learns a distribution over 3D Gaussian-splat representations conditioned on a single input image. To address the lack of ground-truth 3D training data, we propose a Variational AutoReconstructor that learns a latent space from 2D images alone in a self-supervised manner, over which a diffusion model is then trained. Our method generates faithful reconstructions and diverse samples, completing occluded surfaces for high-quality 360-degree renderings.
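To make the generative formulation concrete, the sketch below shows one training step of a conditional latent diffusion model in PyTorch: an image encoder turns the single input image into a conditioning vector, and a denoiser learns to predict the noise added to a scene latent (standing in for the latent the paper obtains from its Variational AutoReconstructor). This is a minimal illustration, not the authors' implementation; the module architectures, latent and conditioning dimensions, and the DDPM-style noise schedule are all assumptions.

```python
# Minimal sketch (not the authors' code) of an image-conditioned latent diffusion step.
# All module designs, sizes, and the noise schedule are illustrative assumptions.

import torch
import torch.nn as nn

LATENT_DIM = 256   # assumed size of the scene latent from the autoreconstructor
COND_DIM = 128     # assumed size of the image-conditioning embedding
T = 1000           # assumed number of diffusion timesteps


class ImageEncoder(nn.Module):
    """Encodes the single input image into a conditioning vector (stand-in CNN)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, COND_DIM),
        )

    def forward(self, img):
        return self.net(img)


class LatentDenoiser(nn.Module):
    """Predicts the noise added to a scene latent, given timestep and image condition."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + COND_DIM + 1, 512), nn.SiLU(),
            nn.Linear(512, 512), nn.SiLU(),
            nn.Linear(512, LATENT_DIM),
        )

    def forward(self, z_t, t, cond):
        t_emb = t.float().unsqueeze(-1) / T  # simple scalar timestep embedding
        return self.net(torch.cat([z_t, cond, t_emb], dim=-1))


# Linear beta schedule (a common DDPM default, assumed here).
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)


def training_step(denoiser, encoder, image, z0):
    """One epsilon-prediction diffusion step on a scene latent z0."""
    b = z0.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(z0)
    a_bar = alphas_cumprod[t].unsqueeze(-1)
    z_t = a_bar.sqrt() * z0 + (1 - a_bar).sqrt() * noise  # forward process q(z_t | z_0)
    pred = denoiser(z_t, t, encoder(image))               # condition on the input image
    return nn.functional.mse_loss(pred, noise)


if __name__ == "__main__":
    enc, den = ImageEncoder(), LatentDenoiser()
    img = torch.randn(4, 3, 64, 64)   # dummy input images
    z0 = torch.randn(4, LATENT_DIM)   # dummy scene latents (placeholder for real ones)
    loss = training_step(den, enc, img, z0)
    loss.backward()
    print(f"diffusion loss: {loss.item():.4f}")
```

In the paper's setup, the real scene latents would come from the Variational AutoReconstructor trained self-supervised on 2D renderings, and sampling would run the learned denoiser in reverse to draw diverse, complete Gaussian-splat scenes conditioned on the one input image.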
Similar Papers
GSFix3D: Diffusion-Guided Repair of Novel Views in Gaussian Splatting
CV and Pattern Recognition
Fixes blurry 3D pictures using AI.
Generative Gaussian Splatting: Generating 3D Scenes with Video Diffusion Priors
CV and Pattern Recognition
Creates realistic 3D worlds from flat images.
Diffusion-Guided Gaussian Splatting for Large-Scale Unconstrained 3D Reconstruction and Novel View Synthesis
CV and Pattern Recognition
Creates realistic 3D worlds from few pictures.