Beyond Semantics: Rediscovering Spatial Awareness in Vision-Language Models

Published: March 21, 2025 | arXiv ID: 2503.17349v1

By: Jianing Qi, Jiawei Liu, Hao Tang, and more

Potential Business Impact:

Teaches computers to better understand where objects are positioned relative to one another in images.

Business Areas:
Semantic Search, Internet Services

Vision-Language Models (VLMs) excel at identifying and describing objects but struggle with spatial reasoning, such as accurately understanding the relative positions of objects. Inspired by the dual-pathway (ventral-dorsal) model of human vision, we investigate why VLMs fail spatial tasks despite strong object recognition capabilities. Our interpretability-driven analysis reveals a critical underlying cause: vision embeddings in VLMs are treated primarily as a semantic "bag of tokens," overshadowing subtle yet crucial positional cues due to their disproportionately large embedding norms. We validate this insight through extensive diagnostic experiments, demonstrating minimal performance impact when token order or fine-grained spatial details are removed. Guided by these findings, we propose simple, interpretable interventions, including normalizing vision embedding norms and extracting mid-layer spatially rich features, to restore spatial awareness. Empirical results on both our synthetic data and standard benchmarks demonstrate improved spatial reasoning capabilities, highlighting the value of interpretability-informed design choices. Our study not only uncovers fundamental limitations in current VLM architectures but also provides actionable insights for enhancing structured perception of visual scenes.
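To make the two proposed interventions concrete, here is a minimal sketch, not the authors' released code, of what norm normalization and mid-layer feature extraction might look like for a ViT-style vision encoder. The tensor shapes, the `mid_layer` index, the `alpha` mixing weight, and the function names are illustrative assumptions, not details taken from the paper.

```python
# A hedged sketch of the abstract's two interventions, assuming we have
# access to the per-layer token embeddings of a ViT-style vision encoder.
# Shapes, layer choice, and mixing weight below are illustrative only.
import torch


def normalize_vision_tokens(tokens: torch.Tensor, target_norm: float = 1.0) -> torch.Tensor:
    """Rescale every vision token to a shared L2 norm so that large-norm
    semantic tokens no longer drown out subtle positional cues."""
    norms = tokens.norm(dim=-1, keepdim=True).clamp_min(1e-6)
    return tokens / norms * target_norm


def mix_mid_layer_features(layer_outputs: list, mid_layer: int, alpha: float = 0.5) -> torch.Tensor:
    """Blend spatially rich mid-layer features into the final-layer
    embeddings before they are handed to the language model."""
    final, mid = layer_outputs[-1], layer_outputs[mid_layer]
    return (1 - alpha) * final + alpha * mid


# Toy usage: 12 encoder layers of 196 patch tokens with 768-dim embeddings.
layers = [torch.randn(1, 196, 768) for _ in range(12)]
mixed = mix_mid_layer_features(layers, mid_layer=6)
vision_embeds = normalize_vision_tokens(mixed)
print(vision_embeds.shape, vision_embeds.norm(dim=-1).mean().item())
```

After normalization, every token contributes with equal magnitude, which is one plausible way to keep positional signal from being overshadowed; the actual normalization scheme and layer selection used in the paper may differ.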

Country of Origin
🇺🇸 United States

Page Count
13 pages

Category
Computer Science:
Computer Vision and Pattern Recognition