VGGT-SLAM: Dense RGB SLAM Optimized on the SL(4) Manifold
By: Dominic Maggio, Hyungtae Lim, Luca Carlone
Potential Business Impact:
Helps robots map rooms using just one camera.
We present VGGT-SLAM, a dense RGB SLAM system constructed by incrementally and globally aligning submaps created from the feed-forward scene reconstruction approach VGGT using only uncalibrated monocular cameras. While related works align submaps using similarity transforms (i.e., translation, rotation, and scale), we show that such approaches are inadequate in the case of uncalibrated cameras. In particular, we revisit the idea of reconstruction ambiguity, where given a set of uncalibrated cameras with no assumption on the camera motion or scene structure, the scene can only be reconstructed up to a 15-degrees-of-freedom projective transformation of the true geometry. This inspires us to recover a consistent scene reconstruction across submaps by optimizing over the SL(4) manifold, thus estimating 15-degrees-of-freedom homography transforms between sequential submaps while accounting for potential loop closure constraints. As verified by extensive experiments, we demonstrate that VGGT-SLAM achieves improved map quality using long video sequences that are infeasible for VGGT due to its high GPU requirements.
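The core idea — aligning submaps with a 15-degrees-of-freedom projective transform rather than a 7-degrees-of-freedom similarity — can be illustrated with a minimal sketch. The snippet below (not the paper's implementation; `project_to_sl4` and `apply_homography` are hypothetical helper names) normalizes a 4x4 transform to unit determinant so it lies in SL(4), then applies it to 3D points in homogeneous coordinates:

```python
import numpy as np

def project_to_sl4(H):
    """Scale a 4x4 transform so det(H) = 1, i.e. an element of SL(4).

    Hypothetical helper for illustration. Since det(s*H) = s^4 * det(H)
    for a 4x4 matrix, dividing by det(H)**(1/4) normalizes the determinant;
    we require det(H) > 0 so a real fourth root exists.
    """
    d = np.linalg.det(H)
    assert d > 0, "expected positive determinant"
    return H / d ** 0.25

def apply_homography(H, pts):
    """Apply a 4x4 projective transform to an (N, 3) array of points."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # lift to (N, 4)
    out = homog @ H.T
    return out[:, :3] / out[:, 3:4]                   # dehomogenize

# Example: a pure scale, expressed as a unit-determinant 4x4 transform.
H = project_to_sl4(np.diag([2.0, 2.0, 2.0, 1.0]))
pts = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 2.0]])
aligned = apply_homography(H, pts)  # scales every point by 2
```

Note that uniform scale is just one of the 15 degrees of freedom; a general SL(4) element also encodes rotation, translation, shear, and the projective distortions that similarity-transform alignment cannot represent, which is why the paper optimizes over the full SL(4) manifold.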
Similar Papers
VGGT-SLAM 2.0: Real-time Dense Feed-forward Scene Reconstruction
CV and Pattern Recognition
Helps robots map places better and faster.
GS4: Generalizable Sparse Splatting Semantic SLAM
CV and Pattern Recognition
Builds detailed 3D maps from videos quickly.
WildGS-SLAM: Monocular Gaussian Splatting SLAM in Dynamic Environments
CV and Pattern Recognition
Lets robots see and map moving things.