Taming Video Diffusion Prior with Scene-Grounding Guidance for 3D Gaussian Splatting from Sparse Inputs
By: Yingji Zhong, Zhihao Li, Dave Zhenyu Chen, and more
Potential Business Impact:
Makes 3D pictures from few photos.
Despite recent successes in novel view synthesis with 3D Gaussian Splatting (3DGS), modeling scenes from sparse inputs remains challenging. In this work, we address two critical yet overlooked issues in real-world sparse-input modeling: extrapolation and occlusion. To tackle them, we propose a reconstruction-by-generation pipeline that leverages learned priors from video diffusion models to provide plausible interpretations of regions that fall outside the field of view or are occluded. However, the generated sequences exhibit inconsistencies that limit their benefit to subsequent 3DGS modeling. To address these inconsistencies, we introduce a novel scene-grounding guidance based on sequences rendered from an optimized 3DGS, which tames the diffusion model into generating consistent sequences. The guidance is training-free and requires no fine-tuning of the diffusion model. To facilitate holistic scene modeling, we also propose a trajectory initialization method that effectively identifies out-of-view and occluded regions, and we design an optimization scheme tailored to training 3DGS with the generated sequences. Experiments demonstrate that our method significantly improves upon the baseline and achieves state-of-the-art performance on challenging benchmarks.
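The abstract does not spell out how the scene-grounding guidance enters the sampler, so the following is a minimal, hypothetical sketch of the general idea: at each denoising step of a video diffusion sampler, the predicted clean frames are nudged toward frames rendered from the already-optimized 3DGS wherever that render is reliable, leaving out-of-view and occluded regions to the diffusion prior. The names denoiser, gs_frames, reliability, and guidance_weight, as well as the DDIM-style loop and toy noise schedule, are illustrative assumptions, not the paper's implementation.

```python
import torch

@torch.no_grad()
def sample_with_scene_grounding(denoiser, gs_frames, reliability,
                                num_steps=50, guidance_weight=0.3):
    """DDIM-style sampling with scene-grounding guidance (conceptual sketch).

    denoiser(x_t, step) -> predicted noise, same shape as x_t (assumed interface)
    gs_frames:   (T, C, H, W) frames rendered from the optimized 3DGS trajectory
    reliability: (T, 1, H, W) in [0, 1]; ~1 where the 3DGS render is trustworthy
    """
    alpha_bars = torch.linspace(1e-4, 0.9999, num_steps + 1)  # toy noise schedule
    x = torch.randn_like(gs_frames)                           # start from pure noise
    for i in range(num_steps):
        a_t, a_next = alpha_bars[i], alpha_bars[i + 1]
        eps = denoiser(x, i)                                   # diffusion prior prediction
        x0 = (x - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()       # predicted clean frames

        # Scene-grounding step: pull the prediction toward the 3DGS render, but
        # only where that render is reliable (observed, unoccluded regions); the
        # diffusion prior fills in the out-of-view / occluded parts.
        x0 = x0 + guidance_weight * reliability * (gs_frames - x0)

        x = a_next.sqrt() * x0 + (1.0 - a_next).sqrt() * eps   # deterministic DDIM update
    return x

# Toy usage with a dummy denoiser standing in for a pretrained video diffusion model:
frames = torch.rand(16, 3, 64, 64)           # stand-in for a 3DGS-rendered sequence
mask = torch.ones(16, 1, 64, 64)             # pretend every pixel is reliable
dummy = lambda x, step: torch.zeros_like(x)  # placeholder network
out = sample_with_scene_grounding(dummy, frames, mask)
```

Because the guidance acts only at sampling time, the diffusion model's weights are untouched, which matches the training-free claim above; the actual method may apply the grounding in latent space or through a gradient-based correction instead.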
Similar Papers
Generative Gaussian Splatting: Generating 3D Scenes with Video Diffusion Priors
CV and Pattern Recognition
Creates realistic 3D worlds from flat images.
GSFixer: Improving 3D Gaussian Splatting with Reference-Guided Video Diffusion Priors
CV and Pattern Recognition
Fixes blurry 3D pictures from few photos.
Diffusion-Guided Gaussian Splatting for Large-Scale Unconstrained 3D Reconstruction and Novel View Synthesis
CV and Pattern Recognition
Creates realistic 3D worlds from few pictures.