Head Pursuit: Probing Attention Specialization in Multimodal Transformers
By: Lorenzo Basile, Valentino Maiorca, Diego Doimo, and more
Potential Business Impact:
Changes what an AI writes or describes about images by adjusting only a few tiny parts inside it.
Language and vision-language models have shown impressive performance across a wide range of tasks, but their internal mechanisms remain only partly understood. In this work, we study how individual attention heads in text-generative models specialize in specific semantic or visual attributes. Building on an established interpretability method, we reinterpret the practice of probing intermediate activations with the final decoding layer through the lens of signal processing. This lets us analyze multiple samples in a principled way and rank attention heads based on their relevance to target concepts. Our results show consistent patterns of specialization at the head level across both unimodal and multimodal transformers. Remarkably, we find that editing as few as 1% of the heads, selected using our method, can reliably suppress or enhance targeted concepts in the model output. We validate our approach on language tasks such as question answering and toxicity mitigation, as well as vision-language tasks including image classification and captioning. Our findings highlight an interpretable and controllable structure within attention layers, offering simple tools for understanding and editing large-scale generative models.
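To make the idea concrete, here is a minimal, hypothetical sketch of the core probing step described in the abstract: project each attention head's contribution through the model's final decoding (unembedding) layer, score how strongly it promotes a target concept token, rank heads by that score, and edit the top-ranked few. All names, shapes, and the zeroing intervention below are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch (not the paper's code) of head-level probing and editing.
# Assumed names: head_outputs, W_U, target_token_id are placeholders.

import torch

torch.manual_seed(0)

n_heads, d_model, vocab = 8, 512, 1000
n_samples = 32
target_token_id = 42  # hypothetical token id representing the target concept

# Per-head contributions to the residual stream for each sample; in a real
# model these would be the per-head attention outputs mapped through W_O.
head_outputs = torch.randn(n_samples, n_heads, d_model)

# Final decoding layer (unembedding matrix) of the model.
W_U = torch.randn(d_model, vocab)

# Probe each head by decoding its output into vocabulary space.
head_logits = head_outputs @ W_U                      # (n_samples, n_heads, vocab)

# Score each head by how strongly it promotes the target concept token,
# averaged over samples, then rank heads by relevance.
scores = head_logits[:, :, target_token_id].mean(dim=0)   # (n_heads,)
ranking = torch.argsort(scores, descending=True)

print("Head relevance scores:", scores.tolist())
print("Heads ranked by relevance:", ranking.tolist())

# A crude edit: suppress the concept by zeroing the top ~1% of heads
# (at least one here); the paper's actual intervention may differ.
k = max(1, n_heads // 100)
edited = head_outputs.clone()
edited[:, ranking[:k], :] = 0.0
```

In practice the head outputs would be collected with forward hooks on a real transformer, and the same ranking could be used either to suppress a concept (as above) or to enhance it by scaling the selected heads up.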
Similar Papers
Start Making Sense(s): A Developmental Probe of Attention Specialization Using Lexical Ambiguity
Computation and Language
Helps computers understand word meanings better.
Investigating The Functional Roles of Attention Heads in Vision Language Models: Evidence for Reasoning Modules
Artificial Intelligence
Shows how computers "think" about pictures and words.
Interpreting Attention Heads for Image-to-Text Information Flow in Large Vision-Language Models
CV and Pattern Recognition
Shows how computers "see" and answer questions.