Investigating the Functional Roles of Attention Heads in Vision Language Models: Evidence for Reasoning Modules

Published: December 11, 2025 | arXiv ID: 2512.10300v1

By: Yanbei Jiang, Xueqi Ma, Shu Liu, and more

Potential Business Impact:

Shows which parts of an AI model handle seeing versus reasoning when it answers questions about pictures and words.

Business Areas:
Computer Vision Hardware, Software

Despite excelling on multimodal benchmarks, vision-language models (VLMs) largely remain a black box. In this paper, we propose a novel interpretability framework to systematically analyze the internal mechanisms of VLMs, focusing on the functional roles of attention heads in multimodal reasoning. To this end, we introduce CogVision, a dataset that decomposes complex multimodal questions into step-by-step subquestions designed to simulate human reasoning through a chain-of-thought paradigm, with each subquestion associated with specific receptive or cognitive functions such as high-level visual reception and inference. Using a probing-based methodology, we identify attention heads that specialize in these functions and characterize them as functional heads. Our analysis across diverse VLM families reveals that these functional heads are universally sparse, vary in number and distribution across functions, and interact with one another in a hierarchical organization. Furthermore, intervention experiments demonstrate their critical role in multimodal reasoning: removing functional heads leads to performance degradation, while emphasizing them enhances accuracy. These findings provide new insights into the cognitive organization of VLMs and suggest promising directions for designing models with more human-aligned perceptual and reasoning abilities.
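The intervention experiments described in the abstract reduce, mechanically, to scaling an individual head's contribution before the attention layer's output projection. Below is a minimal PyTorch sketch of that idea, not the authors' released code: the function `multi_head_attention` and the `head_scale` vector are illustrative assumptions, showing how zeroing a head (ablation) or up-weighting it (emphasis) perturbs a layer's output.

```python
# Minimal sketch (illustrative, not the paper's implementation) of head-level
# intervention: rescale one attention head's output and measure the effect.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def multi_head_attention(x, w_qkv, w_out, n_heads, head_scale):
    """Single-layer multi-head self-attention with a per-head scaling vector.

    head_scale[h] = 0.0 ablates head h; 1.0 leaves it intact; >1.0 emphasizes it.
    """
    seq, d_model = x.shape
    d_head = d_model // n_heads
    qkv = x @ w_qkv                              # (seq, 3 * d_model)
    q, k, v = qkv.chunk(3, dim=-1)
    # Reshape each projection to (n_heads, seq, d_head)
    q = q.view(seq, n_heads, d_head).transpose(0, 1)
    k = k.view(seq, n_heads, d_head).transpose(0, 1)
    v = v.view(seq, n_heads, d_head).transpose(0, 1)
    attn = F.softmax(q @ k.transpose(-2, -1) / d_head**0.5, dim=-1)
    heads = attn @ v                             # (n_heads, seq, d_head)
    # Intervention point: rescale each head's contribution before mixing.
    heads = heads * head_scale.view(n_heads, 1, 1)
    merged = heads.transpose(0, 1).reshape(seq, d_model)
    return merged @ w_out

n_heads, d_model, seq = 4, 32, 6
x = torch.randn(seq, d_model)
w_qkv = torch.randn(d_model, 3 * d_model) / d_model**0.5
w_out = torch.randn(d_model, d_model) / d_model**0.5

baseline = multi_head_attention(x, w_qkv, w_out, n_heads, torch.ones(n_heads))

ablate = torch.ones(n_heads); ablate[2] = 0.0        # remove head 2
emphasize = torch.ones(n_heads); emphasize[2] = 1.5  # up-weight head 2

for name, scale in [("ablate head 2", ablate), ("emphasize head 2", emphasize)]:
    out = multi_head_attention(x, w_qkv, w_out, n_heads, scale)
    drift = (out - baseline).norm() / baseline.norm()
    print(f"{name}: relative output change = {drift:.3f}")
```

In a real VLM the same scaling would typically be applied via forward hooks on each layer's attention module, and the quantity compared would be task accuracy on the CogVision-style subquestions rather than raw output drift.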

Country of Origin
🇦🇺 Australia

Repos / Data Links
None listed.

Page Count
29 pages

Category
Computer Science: Artificial Intelligence