Lifting the Veil on Visual Information Flow in MLLMs: Unlocking Pathways to Faster Inference
By: Hao Yin, Guangzong Si, Zilei Wang
Potential Business Impact:
Makes AI understand pictures faster and cheaper.
Multimodal large language models (MLLMs) improve performance on vision-language tasks by integrating visual features from pre-trained vision encoders into large language models (LLMs). However, how MLLMs process and utilize visual information remains unclear. In this paper, we uncover a shift in the dominant flow of visual information: (1) in shallow layers, strong interactions occur between image tokens and instruction tokens, where most visual information is injected into instruction tokens to form cross-modal semantic representations; (2) in deeper layers, image tokens primarily interact with each other, aggregating the remaining visual information to optimize semantic representations within the visual modality. Based on these insights, we propose Hierarchical Modality-Aware Pruning (HiMAP), a plug-and-play inference acceleration method that dynamically prunes image tokens at specific layers, reducing computational costs by approximately 65% without sacrificing performance. Our findings offer a new understanding of visual information processing in MLLMs and provide a state-of-the-art solution for efficient inference.
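The pruning idea can be illustrated with a short sketch: once image tokens have handed off most of their information to instruction tokens in the shallow layers, image tokens that receive little attention from the text can be dropped before the deeper layers. The snippet below is a minimal illustration of that kind of layer-wise image-token pruning, not the authors' HiMAP implementation; the [image | instruction] token layout, the attention-based scoring rule, and the keep ratio are assumptions chosen for demonstration.

```python
# Minimal sketch of layer-wise image-token pruning (illustrative, not HiMAP).
# Assumes the decoder sequence is laid out as [image tokens | instruction tokens].
import torch

def prune_image_tokens(hidden, attn, num_image, keep_ratio=0.35):
    """Keep the image tokens that receive the most attention from
    instruction tokens; drop the rest before the deeper layers.

    hidden:    (batch, seq_len, dim) hidden states entering the pruning layer
    attn:      (batch, heads, seq_len, seq_len) attention weights of that layer
    num_image: number of leading image tokens in the sequence
    """
    # Average attention from instruction-token queries onto image-token keys.
    instr_to_image = attn[:, :, num_image:, :num_image].mean(dim=(1, 2))  # (batch, num_image)
    k = max(1, int(keep_ratio * num_image))
    # Select the top-k image tokens and keep them in their original order.
    keep_idx = instr_to_image.topk(k, dim=-1).indices.sort(dim=-1).values

    batch, dim = hidden.size(0), hidden.size(-1)
    kept_image = torch.gather(
        hidden[:, :num_image], 1,
        keep_idx.unsqueeze(-1).expand(batch, k, dim))
    # Concatenate surviving image tokens with the untouched instruction tokens.
    return torch.cat([kept_image, hidden[:, num_image:]], dim=1)

if __name__ == "__main__":
    # Random tensors stand in for one decoder layer's outputs.
    B, H, D, n_img, n_txt = 1, 8, 64, 576, 32
    hidden = torch.randn(B, n_img + n_txt, D)
    attn = torch.softmax(torch.randn(B, H, n_img + n_txt, n_img + n_txt), dim=-1)
    pruned = prune_image_tokens(hidden, attn, n_img)
    print(pruned.shape)  # torch.Size([1, 233, 64]) with keep_ratio=0.35
```

In practice, such a step would be applied at the layer where the shallow-to-deep shift occurs, so that later layers attend over a much shorter sequence; the exact layer indices and keep ratio in HiMAP are specified in the paper, not here.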
Similar Papers
Rethinking Visual Layer Selection in Multimodal LLMs
CV and Pattern Recognition
Helps computers understand pictures better for different jobs.
How Multimodal LLMs Solve Image Tasks: A Lens on Visual Grounding, Task Reasoning, and Answer Decoding
CV and Pattern Recognition
Shows how AI understands pictures and words.
Perceiving Beyond Language Priors: Enhancing Visual Comprehension and Attention in Multimodal Models
CV and Pattern Recognition
Helps computers truly understand pictures and words together.