SplatFill: 3D Scene Inpainting via Depth-Guided Gaussian Splatting
By: Mahtab Dahaghin, Milind G. Padalkar, Matteo Toso, and more
Potential Business Impact:
Fills in missing parts of 3D scenes with sharp, geometrically consistent detail.
3D Gaussian Splatting (3DGS) has enabled the creation of highly realistic 3D scene representations from sets of multi-view images. However, inpainting missing regions, whether due to occlusion or scene editing, remains a challenging task, often leading to blurry details, artifacts, and inconsistent geometry. In this work, we introduce SplatFill, a novel depth-guided approach for 3DGS scene inpainting that achieves state-of-the-art perceptual quality and improved efficiency. Our method combines two key ideas: (1) joint depth-based and object-based supervision to ensure inpainted Gaussians are accurately placed in 3D space and aligned with the surrounding geometry, and (2) a consistency-aware refinement scheme that selectively identifies and corrects inconsistent regions without disrupting the rest of the scene. Evaluations on the SPIn-NeRF dataset demonstrate that SplatFill not only surpasses existing NeRF-based and 3DGS-based inpainting methods in visual fidelity but also reduces training time by 24.5%. Qualitative results show our method delivers sharper details, fewer artifacts, and greater coherence across challenging viewpoints.
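To make the first idea concrete, below is a minimal PyTorch-style sketch of what joint depth-based and object-based supervision for inpainted Gaussians could look like. All tensor names, loss weights, and the mask convention are illustrative assumptions for this sketch, not details taken from the SplatFill paper.

```python
# Sketch of a joint depth- and object-supervised inpainting loss.
# Weights, names, and the mask convention are illustrative assumptions,
# not the paper's actual formulation.
import torch
import torch.nn.functional as F


def joint_inpainting_loss(
    rendered_rgb: torch.Tensor,    # (3, H, W) image rendered from the 3DGS scene
    target_rgb: torch.Tensor,      # (3, H, W) 2D-inpainted reference image
    rendered_depth: torch.Tensor,  # (1, H, W) depth rendered from the Gaussians
    target_depth: torch.Tensor,    # (1, H, W) depth estimated for the reference image
    object_mask: torch.Tensor,     # (1, H, W) binary mask, 1 inside the inpainted region
    w_rgb: float = 1.0,
    w_depth: float = 0.5,
    w_object: float = 1.0,
) -> torch.Tensor:
    """Combine photometric, depth, and object-region supervision terms."""
    # Photometric term over the whole image keeps the untouched scene stable.
    l_rgb = F.l1_loss(rendered_rgb, target_rgb)

    # Depth term encourages inpainted Gaussians to sit at 3D positions
    # consistent with the surrounding geometry.
    l_depth = F.l1_loss(rendered_depth, target_depth)

    # Object term re-weights the masked (inpainted) region so its appearance
    # is not dominated by the much larger unmasked area.
    masked = object_mask.expand_as(rendered_rgb)
    l_object = (masked * (rendered_rgb - target_rgb).abs()).sum() / masked.sum().clamp(min=1.0)

    return w_rgb * l_rgb + w_depth * l_depth + w_object * l_object


if __name__ == "__main__":
    # Toy usage with random tensors standing in for rendered outputs.
    H, W = 64, 64
    loss = joint_inpainting_loss(
        torch.rand(3, H, W), torch.rand(3, H, W),
        torch.rand(1, H, W), torch.rand(1, H, W),
        (torch.rand(1, H, W) > 0.7).float(),
    )
    print(loss.item())
```

The depth term is what distinguishes this setup from purely image-space supervision: it pushes newly added Gaussians toward plausible 3D locations rather than letting them float wherever they happen to reproduce the reference colors.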
Similar Papers
Inpaint360GS: Efficient Object-Aware 3D Inpainting via Gaussian Splatting for 360° Scenes
CV and Pattern Recognition
Cleans up messy 360 photos by removing objects.
2D Gaussian Splatting with Semantic Alignment for Image Inpainting
CV and Pattern Recognition
Fills in missing parts of pictures realistically.
From Volume Rendering to 3D Gaussian Splatting: Theory and Applications
CV and Pattern Recognition
Creates realistic 3D worlds from photos fast.