GS4: Generalizable Sparse Splatting Semantic SLAM
By: Mingqi Jiang, Chanho Kim, Chen Ziwen, and more
Potential Business Impact:
Builds detailed 3D maps from videos quickly.
Traditional SLAM algorithms excel at camera tracking but tend to produce lower-resolution, incomplete 3D maps. Recently, Gaussian Splatting (GS) approaches have emerged as an option for SLAM with accurate, dense 3D map building. However, existing GS-based SLAM methods rely on per-scene optimization, which is time-consuming and does not generalize well to diverse scenes. In this work, we introduce the first generalizable GS-based semantic SLAM algorithm that incrementally builds and updates a 3D scene representation from an RGB-D video stream using a learned generalizable network. Our approach starts from an RGB-D image recognition backbone that predicts Gaussian parameters at every downsampled, backprojected image location. Additionally, we seamlessly integrate 3D semantic segmentation into our GS framework, bridging 3D mapping and recognition through a shared backbone. To correct localization drift and floaters, we propose optimizing the GS for only one iteration following global localization. We demonstrate state-of-the-art semantic SLAM performance on the real-world benchmark ScanNet with an order of magnitude fewer Gaussians than other recent GS-based methods, and showcase our model's generalization capability through zero-shot transfer to the NYUv2 and TUM RGB-D datasets.
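The abstract anchors Gaussians at downsampled, backprojected image locations. A minimal sketch of that backprojection step is below; the camera intrinsics (`fx`, `fy`, `cx`, `cy`), the stride of 4, and the placeholder scale/opacity values stand in for what the paper's learned backbone would actually predict, and are assumptions for illustration only.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy, stride=4):
    """Backproject a depth map to 3D points at a downsampled pixel grid.

    Mirrors the idea in the abstract: one Gaussian per downsampled,
    backprojected pixel. The learned network that predicts the remaining
    Gaussian parameters is replaced here by constant placeholders.
    """
    h, w = depth.shape
    # Downsampled pixel grid (every `stride`-th pixel).
    vs, us = np.meshgrid(np.arange(0, h, stride),
                         np.arange(0, w, stride), indexing="ij")
    z = depth[vs, us]
    # Standard pinhole backprojection: pixel + depth -> 3D camera coords.
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    means = np.stack([x, y, z], axis=-1).reshape(-1, 3)  # Gaussian centers
    # Placeholder per-Gaussian parameters a learned backbone would predict:
    scales = np.full((means.shape[0], 3), 0.01)      # anisotropic scales
    opacities = np.full((means.shape[0], 1), 0.5)    # per-Gaussian opacity
    return means, scales, opacities

# Toy usage: a flat surface 1 m in front of the camera.
depth = np.ones((48, 64), dtype=np.float32)
means, scales, opacities = backproject_depth(depth, fx=60, fy=60, cx=32, cy=24)
```

With a 48x64 depth map and stride 4, this yields 12x16 = 192 Gaussians, illustrating why a generalizable feed-forward predictor can keep the Gaussian count an order of magnitude lower than dense per-scene optimization.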
Similar Papers
GSsplat: Generalizable Semantic Gaussian Splatting for Novel-view Synthesis in 3D Scenes
Graphics
Makes 3D scenes understandable from many angles.
SplatMAP: Online Dense Monocular SLAM with 3D Gaussian Splatting
CV and Pattern Recognition
Makes 3D models from videos more real.
Large-Scale Gaussian Splatting SLAM
CV and Pattern Recognition
Builds 3D maps of big outdoor places.