Building temporally coherent 3D maps with VGGT for memory-efficient Semantic SLAM
By: Gergely Dinya, Péter Halász, András Lőrincz, and more
Potential Business Impact:
Helps robots see and understand moving things.
We present a fast spatio-temporal scene understanding framework built on the Visual Geometry Grounded Transformer (VGGT). The pipeline is designed for efficient, near-real-time operation, supporting applications such as assistive navigation. To update the 3D scene representation continuously, we process the image stream with a sliding window and align the resulting submaps, thereby overcoming VGGT's high memory demands. We exploit the VGGT tracking head to aggregate 2D semantic instance masks into 3D objects. To support temporal consistency and richer contextual reasoning, the system stores timestamps and instance-level identities, enabling the detection of changes in the environment. We evaluate the approach on well-known benchmarks and on custom datasets designed for assistive navigation scenarios; the results demonstrate the framework's applicability to real-world settings.
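The sliding-window idea in the abstract can be sketched in code: each VGGT window yields a submap in its own frame, and re-observed points shared with the previous window let us estimate a rigid transform into the global map, while tracked instances accumulate timestamped 3D points. This is a minimal toy sketch, not the authors' implementation; the class and method names (`SubmapStitcher`, `add_window`) are invented, and the Kabsch/Umeyama least-squares fit is one assumed choice of alignment method.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= src @ R.T + t (Kabsch/Umeyama)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

class SubmapStitcher:
    """Toy global map: each new window is aligned to the previous one through
    points both windows observe, then merged; tracked instances accumulate
    timestamped 3D points so scene changes can be detected later."""

    def __init__(self):
        self.points = np.empty((0, 3))   # aligned global point cloud
        self.objects = {}                # track_id -> list of (timestamp, (M,3) points)
        self._tail = None                # globally aligned points the next window re-observes

    def add_window(self, overlap_pts, new_pts, tracks, stamp):
        # overlap_pts: this window's local coordinates of the points shared
        # with the previous window; tracks: track_id -> (M,3) local points.
        if self._tail is not None:
            R, t = rigid_align(overlap_pts, self._tail)
            new_pts = new_pts @ R.T + t
            tracks = {tid: p @ R.T + t for tid, p in tracks.items()}
        self.points = np.vstack([self.points, new_pts])
        for tid, obj_pts in tracks.items():
            self.objects.setdefault(tid, []).append((stamp, obj_pts))
        self._tail = new_pts  # the next window must re-observe (some of) these
```

In a real system the overlap would come from the shared frames of consecutive windows and the per-track points from the VGGT tracking head's instance masks back-projected to 3D; here both are passed in directly to keep the sketch self-contained.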
Similar Papers
SwiftVGGT: A Scalable Visual Geometry Grounded Transformer for Large-Scale Scenes
CV and Pattern Recognition
Builds detailed 3D maps much faster.
DriveVGGT: Visual Geometry Transformer for Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars see better in 3D.
LiteVGGT: Boosting Vanilla VGGT via Geometry-aware Cached Token Merging
CV and Pattern Recognition
Makes 3D pictures from many photos faster.