PointSLAM++: Robust Dense Neural Gaussian Point Cloud-based SLAM
By: Xu Wang, Boyao Han, Xiaojun Chen, and more
Potential Business Impact:
Builds accurate 3D maps for robots and augmented reality.
Real-time 3D reconstruction is crucial for robotics and augmented reality, yet current simultaneous localization and mapping (SLAM) approaches often struggle to maintain structural consistency and robust pose estimation in the presence of depth noise. This work introduces PointSLAM++, a novel RGB-D SLAM system that leverages a hierarchically constrained neural Gaussian representation to preserve structural relationships while generating Gaussian primitives for scene mapping. It also employs progressive pose optimization to mitigate depth sensor noise, significantly enhancing localization accuracy. Furthermore, it utilizes a dynamic neural representation graph that adjusts the distribution of Gaussian nodes based on local geometric complexity, enabling the map to adapt to intricate scene details in real time. This combination yields high-precision 3D mapping and photorealistic scene rendering. Experimental results show that PointSLAM++ outperforms existing 3DGS-based SLAM methods in reconstruction accuracy and rendering quality, demonstrating its advantages for large-scale AR and robotics.
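The abstract sketches, but does not detail, how the dynamic representation graph adapts Gaussian density to local geometric complexity. As a loose illustration only (the curvature proxy, the per-point budget, and all names below are assumptions, not the authors' method), the following Python sketch allocates more Gaussian primitives where a covariance-based surface-variation score indicates complex geometry:

```python
# Minimal sketch: adapt the number of Gaussian primitives per point to a
# local-complexity score. This is an illustrative stand-in, not PointSLAM++.
import numpy as np

def curvature_proxy(points: np.ndarray, k: int = 16) -> np.ndarray:
    """Surface-variation score lambda_min / (sum of eigenvalues) over each
    point's k nearest neighbors: ~0 on flat patches, larger on rough/sharp
    geometry."""
    n = len(points)
    # Brute-force kNN; fine for a toy example, use a KD-tree for real scans.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]
    scores = np.empty(n)
    for i in range(n):
        cov = np.cov(points[idx[i]].T)   # 3x3 neighborhood covariance
        w = np.linalg.eigvalsh(cov)      # eigenvalues, ascending order
        scores[i] = w[0] / max(w.sum(), 1e-12)
    return scores

def gaussian_budget(scores: np.ndarray, min_g: int = 1, max_g: int = 6) -> np.ndarray:
    """Map the complexity score to a number of Gaussian primitives per point."""
    span = max(scores.max() - scores.min(), 1e-12)
    s = (scores - scores.min()) / span
    return (min_g + np.round(s * (max_g - min_g))).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy scene: a perfectly flat patch and, farther away, a rough patch.
    flat = np.column_stack([rng.uniform(0, 1, 200),
                            rng.uniform(0, 1, 200),
                            np.zeros(200)])
    rough = np.column_stack([rng.uniform(2, 3, 200),
                             rng.uniform(0, 1, 200),
                             rng.normal(0, 0.05, 200)])
    pts = np.vstack([flat, rough])
    budget = gaussian_budget(curvature_proxy(pts))
    print("mean Gaussians per point, flat patch :", budget[:200].mean())
    print("mean Gaussians per point, rough patch:", budget[200:].mean())
```

In the actual system, a comparable complexity signal would presumably drive insertion and pruning of Gaussian nodes in the representation graph rather than a fixed per-point budget.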
Similar Papers
Gaussian-Plus-SDF SLAM: High-fidelity 3D Reconstruction at 150+ fps
CV and Pattern Recognition
Makes 3D maps of rooms much faster.
SP-SLAM: Neural Real-Time Dense SLAM With Scene Priors
CV and Pattern Recognition
Builds detailed 3D maps of places super fast.
Pseudo Depth Meets Gaussian: A Feed-forward RGB SLAM Baseline
CV and Pattern Recognition
Makes 3D models from videos much faster.