On Geometric Understanding and Learned Data Priors in VGGT
By: Jelena Bratulić, Sudhanshu Mittal, Thomas Brox, et al.
The Visual Geometry Grounded Transformer (VGGT) is a 3D foundation model that infers camera geometry and scene structure in a single feed-forward pass. Trained in a supervised, single-step fashion on large datasets, VGGT raises a key question: does it reason over geometric concepts, as traditional multi-view methods do, or does it rely primarily on learned, appearance-based data priors? In this work, we conduct a systematic analysis of VGGT's internal mechanisms to uncover whether geometric understanding emerges within its representations. By probing intermediate features, analyzing attention patterns, and performing interventions, we examine how the model implements its functionality. Our findings reveal that VGGT implicitly performs correspondence matching within its global attention layers and encodes epipolar geometry, despite being trained without explicit geometric constraints. We further investigate VGGT's dependence on its learned data priors. Using spatial input masking and perturbation experiments, we assess its robustness to occlusions, appearance variations, and camera configurations, comparing it with classical multi-stage pipelines. Together, these insights show how VGGT internalizes geometric structure while still drawing on learned data priors.
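As a rough illustration of what "implicit correspondence matching in global attention" could look like in practice, the minimal PyTorch sketch below treats the cross-image block of a global attention map as a matching matrix and reads off patch correspondences via argmax. The patch-grid size, tensor shapes, and confidence thresholding are illustrative assumptions for exposition, not VGGT's actual configuration or the probing protocol used in the paper.

```python
import torch

# Sketch: recovering patch correspondences from a global-attention map,
# assuming attention is computed over the concatenated patch tokens of
# two input images. All shapes below are assumptions, not VGGT's config.

H = W = 14            # assumed patch grid per image
N = H * W             # tokens per image
# Stand-in for one head's attention weights at some global layer.
attn = torch.rand(2 * N, 2 * N).softmax(dim=-1)

# Cross-image block: rows = tokens of image 1, cols = tokens of image 2.
cross = attn[:N, N:]                      # (N, N)
match = cross.argmax(dim=-1)              # best-matching patch in image 2

# Convert flat token indices back to patch-grid (row, col) coordinates.
idx = torch.arange(N)
src = torch.stack([idx // W, idx % W], dim=-1)
dst = torch.stack([match // W, match % W], dim=-1)

# Keep only confident matches, e.g. peak attention mass above a threshold.
conf = cross.max(dim=-1).values
pairs = torch.cat([src, dst], dim=-1)[conf > conf.mean()]
print(pairs[:5])  # (row1, col1, row2, col2) patch correspondences
```

With real model activations in place of the random tensor, comparing such argmax matches against ground-truth correspondences (or checking whether they concentrate along epipolar lines) is one way to test the paper's claim that matching emerges inside the global attention layers.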