Segmentation-Guided Neural Radiance Fields for Novel Street View Synthesis
By: Yizhou Li, Yusuke Monno, Masatoshi Okutomi, and more
Potential Business Impact:
Creates realistic 3D views of outdoor street scenes.
Recent advances in Neural Radiance Fields (NeRF) have shown great potential in 3D reconstruction and novel view synthesis, particularly for indoor and small-scale scenes. However, extending NeRF to large-scale outdoor environments presents challenges such as transient objects, sparse cameras and textures, and varying lighting conditions. In this paper, we propose a segmentation-guided enhancement to NeRF for outdoor street scenes, focusing on complex urban environments. Our approach extends ZipNeRF and utilizes Grounded SAM for segmentation mask generation, enabling effective handling of transient objects, modeling of the sky, and regularization of the ground. We also introduce appearance embeddings to adapt to inconsistent lighting across view sequences. Experimental results demonstrate that our method outperforms the baseline ZipNeRF, improving novel view synthesis quality with fewer artifacts and sharper details.
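The abstract outlines how segmentation masks gate the training signal (ignore transient objects, constrain the sky) and how per-image appearance embeddings absorb lighting changes. As a rough illustration only, the sketch below shows one common way such masks and embeddings can be wired into a NeRF-style photometric loss; all names and weights (seg_transient, seg_sky, lambda_sky, the embedding size) are illustrative assumptions, not the authors' actual implementation or the losses used in the paper.

```python
# Minimal sketch (not the authors' code) of combining segmentation masks from a
# model such as Grounded SAM with a NeRF-style photometric loss, plus a
# NeRF-W-style per-image appearance embedding. All names are hypothetical.
import torch
import torch.nn as nn


class AppearanceEmbedding(nn.Module):
    """One learned latent vector per training image, fed to the color branch."""
    def __init__(self, num_images: int, dim: int = 32):
        super().__init__()
        self.table = nn.Embedding(num_images, dim)

    def forward(self, image_ids: torch.Tensor) -> torch.Tensor:
        return self.table(image_ids)  # [N_rays, dim]


def segmentation_guided_loss(pred_rgb, gt_rgb, pred_opacity,
                             seg_transient, seg_sky,
                             lambda_sky: float = 0.01):
    """Photometric loss that skips transient pixels and pushes sky rays to zero opacity.

    pred_rgb, gt_rgb:       [N_rays, 3] rendered and ground-truth colors
    pred_opacity:           [N_rays]    accumulated alpha along each ray
    seg_transient, seg_sky: [N_rays]    boolean masks from the segmentation model
    """
    static = ~seg_transient                             # keep only static-scene pixels
    photo = ((pred_rgb - gt_rgb) ** 2).mean(dim=-1)     # per-ray MSE
    photo_loss = (photo * static.float()).sum() / static.float().sum().clamp(min=1.0)

    # Sky rays should not hit geometry: encourage zero accumulated opacity there.
    sky_loss = (pred_opacity * seg_sky.float()).sum() / seg_sky.float().sum().clamp(min=1.0)

    return photo_loss + lambda_sky * sky_loss


if __name__ == "__main__":
    n = 1024
    loss = segmentation_guided_loss(
        pred_rgb=torch.rand(n, 3), gt_rgb=torch.rand(n, 3),
        pred_opacity=torch.rand(n),
        seg_transient=torch.rand(n) < 0.1,   # e.g. pedestrians, cars
        seg_sky=torch.rand(n) < 0.3,
    )
    print(float(loss))
```

In this kind of setup, the transient mask keeps moving objects from corrupting the static scene, the sky term prevents floaters along sky rays, and the appearance embedding lets the color network explain per-sequence lighting differences; the paper additionally regularizes the ground region, which would add a further mask-driven term not shown here.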
Similar Papers
Improving Geometric Consistency for 360-Degree Neural Radiance Fields in Indoor Scenarios
CV and Pattern Recognition
Makes computer pictures of rooms look more real.
VDNeRF: Vision-only Dynamic Neural Radiance Field for Urban Scenes
CV and Pattern Recognition
Makes robots see moving things and know where they are.
Empowering Sparse-Input Neural Radiance Fields with Dual-Level Semantic Guidance from Dense Novel Views
CV and Pattern Recognition
Makes 3D pictures from few photos.