Where Does Vision Meet Language? Understanding and Refining Visual Fusion in MLLMs via Contrastive Attention
By: Shezheng Song, Shasha Li, Jie Yu
Multimodal Large Language Models (MLLMs) have achieved remarkable progress in vision-language understanding, yet how they internally integrate visual and textual information remains poorly understood. To bridge this gap, we perform a systematic layer-wise masking analysis across multiple architectures, revealing how visual-text fusion evolves within MLLMs. The results show that fusion emerges at a few specific layers rather than being uniformly distributed across the network, and certain models exhibit a late-stage "review" phenomenon in which visual signals are reactivated before output generation. In addition, we analyze layer-wise attention evolution and observe persistent high-attention noise on irrelevant regions, along with gradually increasing attention on text-aligned areas. Guided by these insights, we introduce a training-free contrastive attention framework that models the transformation between the early fusion layers and the final layers to highlight meaningful attention shifts. Extensive experiments across various MLLMs and benchmarks validate our analysis and demonstrate that the proposed approach improves multimodal reasoning performance. Code will be released.
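To make the contrastive-attention idea concrete, the following is a minimal Python sketch, not the authors' released code. It assumes access to per-layer attention tensors (e.g., from a Hugging Face model run with output_attentions=True) and illustrates one plausible reading of the abstract: compare attention over image tokens at an early fusion layer with the final layer and keep only the shifts that grow. The layer indices, the log-ratio contrast, and the renormalization step are illustrative assumptions, not the paper's exact formulation.

import torch

def contrastive_attention_map(attentions, image_token_slice,
                              early_layer=8, late_layer=-1, eps=1e-6):
    """Sketch of a contrastive attention weighting.

    attentions: tuple of per-layer tensors shaped [batch, heads, seq, seq].
    image_token_slice: slice selecting the image-token positions in the sequence.
    Returns per-image-token weights emphasizing attention that increases
    from an assumed early fusion layer to the final layer.
    """
    # Average over heads, then over query positions, keeping only the
    # attention mass received by image tokens: [batch, n_image_tokens].
    early = attentions[early_layer].mean(dim=1)[:, :, image_token_slice].mean(dim=1)
    late = attentions[late_layer].mean(dim=1)[:, :, image_token_slice].mean(dim=1)

    # Contrast: log-ratio of late vs. early attention; positive values mark
    # image regions whose attention grows toward the output layers.
    shift = torch.log((late + eps) / (early + eps))

    # Keep only positive (meaningful) shifts and renormalize to weights
    # that could reweight visual tokens or rescale attention at inference.
    shift = torch.relu(shift)
    weights = shift / (shift.sum(dim=-1, keepdim=True) + eps)
    return weights

Because the operation only reads attention maps already produced at inference time, a scheme like this stays training-free, which matches the framing in the abstract; how the resulting weights are injected back into the model is left open here.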
Similar Papers
Rethinking Visual Information Processing in Multimodal LLMs
CV and Pattern Recognition
Helps computers understand pictures and words together better.
Perceiving Beyond Language Priors: Enhancing Visual Comprehension and Attention in Multimodal Models
CV and Pattern Recognition
Helps computers truly understand pictures and words together.
How Multimodal LLMs Solve Image Tasks: A Lens on Visual Grounding, Task Reasoning, and Answer Decoding
CV and Pattern Recognition
Shows how AI understands pictures and words.