3D-Aware Vision-Language Models Fine-Tuning with Geometric Distillation

Published: June 11, 2025 | arXiv ID: 2506.09883v1

By: Seonho Lee, Jiho Choi, Inha Kang, and more

Potential Business Impact:

Teaches computers to understand 3D space better.

Business Areas:
Image Recognition, Data and Analytics, Software

Vision-Language Models (VLMs) have shown remarkable performance on diverse visual and linguistic tasks, yet they remain fundamentally limited in their understanding of 3D spatial structures. We propose Geometric Distillation, a lightweight, annotation-free fine-tuning framework that injects human-inspired geometric cues into pretrained VLMs without modifying their architecture. By distilling (1) sparse correspondences, (2) relative depth relations, and (3) dense cost volumes from off-the-shelf 3D foundation models (e.g., MASt3R, VGGT), our method shapes representations to be geometry-aware while remaining compatible with natural image-text inputs. Through extensive evaluations on 3D vision-language reasoning and 3D perception benchmarks, our method consistently outperforms prior approaches, achieving improved 3D spatial reasoning with significantly lower computational cost. Our work demonstrates a scalable and efficient path to bridge 2D-trained VLMs with 3D understanding, opening up wider use in spatially grounded multimodal tasks.
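
To make the distillation idea concrete, below is a minimal sketch of how a combined objective over the three cues named in the abstract (sparse correspondences, relative depth relations, dense cost volumes) might look. All function and argument names, the placeholder pseudo-depth score, and the loss weights are assumptions for illustration, not the paper's actual implementation.

import torch
import torch.nn.functional as F

def geometric_distillation_loss(vlm_feats, corr_pairs, depth_order, teacher_cost_volume,
                                weights=(1.0, 1.0, 1.0)):
    # vlm_feats: (N, D) patch features from the VLM image encoder being fine-tuned
    # corr_pairs: (M, 2) indices of patch pairs a 3D teacher (e.g., MASt3R) marks as corresponding
    # depth_order: (K, 2) patch index pairs (i, j) where the teacher says patch i is closer than patch j
    # teacher_cost_volume: (N, N) dense pairwise similarity/cost map from the 3D teacher
    feats = F.normalize(vlm_feats, dim=-1)

    # (1) sparse correspondence term: matched patches should have similar features
    fi, fj = feats[corr_pairs[:, 0]], feats[corr_pairs[:, 1]]
    loss_corr = (1.0 - (fi * fj).sum(dim=-1)).mean()

    # (2) relative depth term: a ranking loss on a per-patch depth score;
    # here a simple feature projection stands in for a learned depth head (an assumption)
    depth_score = feats.mean(dim=-1)
    loss_depth = F.margin_ranking_loss(
        depth_score[depth_order[:, 1]], depth_score[depth_order[:, 0]],
        target=torch.ones(depth_order.shape[0]), margin=0.1)

    # (3) dense cost-volume term: align the student's pairwise similarity map with the teacher's
    student_cost_volume = feats @ feats.t()
    loss_cv = F.mse_loss(student_cost_volume, teacher_cost_volume)

    return (weights[0] * loss_corr
            + weights[1] * loss_depth
            + weights[2] * loss_cv)

In this reading, the VLM architecture is untouched and only its visual features are nudged toward the teacher's geometry, which matches the abstract's claim of a lightweight, annotation-free fine-tuning framework.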

Country of Origin
🇰🇷 Korea, Republic of

Page Count
19 pages

Category
Computer Science:
Computer Vision and Pattern Recognition