Evaluating Foundation Models' 3D Understanding Through Multi-View Correspondence Analysis

Published: December 12, 2025 | arXiv ID: 2512.11574v1

By: Valentina Lilova, Toyesh Chakravorty, Julian I. Bibo, and more

Potential Business Impact:

Tests how well AI models understand the 3D structure of objects from 2D images taken at different viewpoints.

Business Areas:
Image Recognition, Data and Analytics, Software

Benchmarking the 3D spatial understanding of foundation models is essential for real-world applications such as robotics and autonomous driving. Existing evaluations often rely on downstream finetuning with linear heads or task-specific decoders, making it difficult to isolate the intrinsic 3D reasoning ability of pretrained encoders. In this work, we introduce a novel benchmark for in-context 3D scene understanding that requires no finetuning and directly probes the quality of dense visual features. Building on the Hummingbird framework, which evaluates in-context 2D scene understanding, we extend the setup to the 3D Multi-View ImageNet (MVImgNet) dataset. Given a set of images of objects from specific angles (keys), we benchmark segmentation performance on novel views (queries) and report scores across four difficulty tiers (easy, medium, hard, and extreme) based on the key-query view contrast. We benchmark 8 state-of-the-art foundation models and show that DINO-based encoders remain competitive across large viewpoint shifts, while 3D-aware models like VGGT require dedicated multi-view adjustments. Our code is publicly available at https://github.com/ToyeshC/open-hummingbird-3d-eval.
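Hummingbird-style in-context evaluation transfers labels from key views to query views by matching dense patch features, with no finetuning. A minimal sketch of that idea, assuming a simple cosine-similarity nearest-neighbour vote over per-patch features (the function name and toy data are illustrative, not the paper's actual pipeline):

```python
import numpy as np

def nn_label_transfer(key_feats, key_labels, query_feats, k=5):
    """Transfer per-patch labels from key views to a query view by
    cosine-similarity k-nearest-neighbour voting (Hummingbird-style probe)."""
    # L2-normalise so the dot product equals cosine similarity.
    kf = key_feats / np.linalg.norm(key_feats, axis=1, keepdims=True)
    qf = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    sim = qf @ kf.T                          # (Nq, Nk) similarity matrix
    topk = np.argsort(-sim, axis=1)[:, :k]   # k nearest key patches per query
    votes = key_labels[topk]                 # (Nq, k) candidate labels
    # Majority vote per query patch.
    return np.array([np.bincount(v).argmax() for v in votes])

# Toy example: two well-separated feature clusters standing in for two classes.
rng = np.random.default_rng(0)
key_feats = np.vstack([rng.normal(0, 0.1, (20, 8)) + 1.0,
                       rng.normal(0, 0.1, (20, 8)) - 1.0])
key_labels = np.array([0] * 20 + [1] * 20)
query_feats = np.vstack([rng.normal(0, 0.1, (5, 8)) + 1.0,
                         rng.normal(0, 0.1, (5, 8)) - 1.0])
preds = nn_label_transfer(key_feats, key_labels, query_feats)
print(preds)  # class 0 for the first five query patches, class 1 for the rest
```

Because the probe is pure feature retrieval, segmentation quality directly reflects how viewpoint-robust the encoder's dense features are, which is what the easy-to-extreme key-query contrast tiers stress.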

Country of Origin
🇳🇱 Netherlands

Repos / Data Links

https://github.com/ToyeshC/open-hummingbird-3d-eval
Page Count
27 pages

Category
Computer Science:
CV and Pattern Recognition