D$^2$GS: Depth-and-Density Guided Gaussian Splatting for Stable and Accurate Sparse-View Reconstruction
By: Meixi Song, Xin Lin, Dizhe Zhang, and others
Potential Business Impact:
Makes 3D pictures look good with few photos.
Recent advances in 3D Gaussian Splatting (3DGS) enable real-time, high-fidelity novel view synthesis (NVS) with explicit 3D representations. However, performance degradation and instability remain significant under sparse-view conditions. In this work, we identify two key failure modes under sparse-view conditions: overfitting in regions with excessive Gaussian density near the camera, and underfitting in distant areas with insufficient Gaussian coverage. To address these challenges, we propose a unified framework, D$^2$GS, comprising two key components: a Depth-and-Density Guided Dropout strategy that suppresses overfitting by adaptively masking redundant Gaussians based on density and depth, and a Distance-Aware Fidelity Enhancement module that improves reconstruction quality in under-fitted far-field areas through targeted supervision. Moreover, we introduce a new evaluation metric to quantify the stability of learned Gaussian distributions, providing insights into the robustness of sparse-view 3DGS. Extensive experiments on multiple datasets demonstrate that our method significantly improves both visual quality and robustness under sparse-view conditions. The project page can be found at: https://insta360-research-team.github.io/DDGS-website/.
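The core idea of the dropout component can be sketched in code. The following is a minimal, hypothetical illustration (not the paper's implementation): it assumes per-Gaussian depth values and a precomputed local-density score, and assigns a higher dropout probability to Gaussians that are both near the camera and in densely populated regions. The function name, the normalization, and the `max_drop` cap are all assumptions for illustration.

```python
import numpy as np

def depth_density_dropout_mask(depths, local_density, max_drop=0.5, rng=None):
    """Return a boolean keep-mask over Gaussians (sketch, not the paper's method).

    Gaussians that are close to the camera (small depth) AND sit in
    densely populated regions receive a higher dropout probability,
    mirroring the idea of suppressing near-field overfitting.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = 1e-8
    # Normalize depth and density to [0, 1]; eps guards degenerate ranges.
    d = (depths - depths.min()) / (depths.max() - depths.min() + eps)
    rho = (local_density - local_density.min()) / (
        local_density.max() - local_density.min() + eps
    )
    # Drop probability grows with density (rho) and proximity (1 - d),
    # capped at max_drop so some coverage is always retained.
    p_drop = max_drop * rho * (1.0 - d)
    return rng.random(depths.shape) >= p_drop
```

In a training loop, such a mask would be resampled each iteration and used to exclude the dropped Gaussians from rasterization, analogous to standard dropout regularization but spatially guided.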
Similar Papers
Geometry-Consistent 4D Gaussian Splatting for Sparse-Input Dynamic View Synthesis
CV and Pattern Recognition
Creates realistic 3D scenes from few pictures.
UGOD: Uncertainty-Guided Differentiable Opacity and Soft Dropout for Enhanced Sparse-View 3DGS
CV and Pattern Recognition
Makes 3D pictures look better with fewer details.
DIP-GS: Deep Image Prior For Gaussian Splatting Sparse View Recovery
CV and Pattern Recognition
Makes 3D pictures from few photos.