Emergent Extreme-View Geometry in 3D Foundation Models

Published: November 27, 2025 | arXiv ID: 2511.22686v1

By: Yiwen Zhang, Joseph Tung, Ruojin Cai, and more

Potential Business Impact:

Enables 3D reconstruction and camera pose estimation even when images are taken from extreme, barely overlapping viewpoints.

Business Areas:
Image Recognition Data and Analytics, Software

3D foundation models (3DFMs) have recently transformed 3D vision, enabling joint prediction of depths, poses, and point maps directly from images. Yet their ability to reason under extreme, non-overlapping views remains largely unexplored. In this work, we study their internal representations and find that 3DFMs exhibit an emergent understanding of extreme-view geometry, despite never being trained for such conditions. To further enhance these capabilities, we introduce a lightweight alignment scheme that refines their internal 3D representation by tuning only a small subset of backbone bias terms, leaving all decoder heads frozen. This targeted adaptation substantially improves relative pose estimation under extreme viewpoints without degrading per-image depth or point quality. Additionally, we contribute MegaUnScene, a new benchmark of Internet scenes unseen by existing 3DFMs, with dedicated test splits for both relative pose estimation and dense 3D reconstruction. All code and data will be released.
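The alignment scheme described above tunes only a small subset of backbone bias terms while keeping all decoder heads frozen. A minimal PyTorch sketch of that idea is below; the toy `backbone` and `decoder_head` modules are hypothetical stand-ins, since the paper's actual architecture is not detailed here.

```python
import torch.nn as nn

# Hypothetical stand-in for a 3DFM backbone and one decoder head;
# the real model in the paper is far larger.
backbone = nn.Sequential(
    nn.Linear(64, 128),
    nn.GELU(),
    nn.Linear(128, 64),
)
decoder_head = nn.Linear(64, 3)  # e.g., a point-map head, kept frozen

# Freeze every parameter first.
for p in backbone.parameters():
    p.requires_grad = False
for p in decoder_head.parameters():
    p.requires_grad = False

# Re-enable gradients only for backbone bias terms
# (a BitFit-style, bias-only adaptation).
for name, p in backbone.named_parameters():
    if name.endswith("bias"):
        p.requires_grad = True

trainable = [n for n, p in backbone.named_parameters() if p.requires_grad]
print(trainable)  # only the two Linear-layer bias parameters remain trainable
```

An optimizer would then be built over `[p for p in backbone.parameters() if p.requires_grad]`, so the update touches a tiny fraction of the weights and cannot degrade the frozen depth and point heads directly.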

Page Count
20 pages

Category
Computer Science:
CV and Pattern Recognition