Emergent Extreme-View Geometry in 3D Foundation Models
By: Yiwen Zhang, Joseph Tung, Ruojin Cai, and more
Potential Business Impact:
Makes 3D reconstruction from photos work even when the camera views barely overlap.
3D foundation models (3DFMs) have recently transformed 3D vision, enabling joint prediction of depths, poses, and point maps directly from images. Yet their ability to reason under extreme, non-overlapping views remains largely unexplored. In this work, we study their internal representations and find that 3DFMs exhibit an emergent understanding of extreme-view geometry, despite never being trained for such conditions. To further enhance these capabilities, we introduce a lightweight alignment scheme that refines their internal 3D representation by tuning only a small subset of backbone bias terms, leaving all decoder heads frozen. This targeted adaptation substantially improves relative pose estimation under extreme viewpoints without degrading per-image depth or point quality. Additionally, we contribute MegaUnScene, a new benchmark of Internet scenes unseen by existing 3DFMs, with dedicated test splits for both relative pose estimation and dense 3D reconstruction. All code and data will be released.
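The alignment scheme described in the abstract tunes only a small subset of backbone bias terms while keeping all decoder heads frozen. Below is a minimal PyTorch sketch of what such bias-only adaptation could look like; the `model.backbone` attribute and method names here are illustrative assumptions, not the paper's released code.

```python
# A minimal sketch of bias-only backbone adaptation, assuming a PyTorch 3DFM
# exposing a `backbone` submodule with separate (frozen) decoder heads.
# `model.backbone` is a hypothetical attribute used for illustration.
import torch
import torch.nn as nn

def freeze_all_but_backbone_biases(model: nn.Module) -> list[nn.Parameter]:
    """Freeze every parameter, then re-enable gradients only for bias
    terms inside the backbone; all decoder heads stay frozen."""
    for p in model.parameters():
        p.requires_grad = False

    trainable = []
    for name, p in model.backbone.named_parameters():
        if name.endswith("bias"):  # tune only the bias vectors
            p.requires_grad = True
            trainable.append(p)
    return trainable

# Usage: optimize only the tiny set of backbone biases.
# trainable = freeze_all_but_backbone_biases(model)
# optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

Because only bias vectors receive gradients, the number of tuned parameters stays tiny relative to the full backbone, which is what lets the adaptation improve pose estimation without disturbing the frozen depth and point heads.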
Similar Papers
E3D-Bench: A Benchmark for End-to-End 3D Geometric Foundation Models
CV and Pattern Recognition
Helps robots understand 3D space from pictures.
Emergent Outlier View Rejection in Visual Geometry Grounded Transformers
CV and Pattern Recognition
Builds 3D scenes from photos while rejecting the bad views.
A Neural Field-Based Approach for View Computation & Data Exploration in 3D Urban Environments
CV and Pattern Recognition
Finds the best city views for urban planning and analysis.