Blink: Dynamic Visual Token Resolution for Enhanced Multimodal Understanding
By: Yuchen Feng, Zhenyu Zhang, Naibin Gu and more
Potential Business Impact:
Lets computers see details like humans do.
Multimodal large language models (MLLMs) have achieved remarkable progress on various vision-language tasks, yet their visual perception remains limited. By contrast, humans perceive complex scenes efficiently by dynamically scanning and focusing on salient regions in a sequential "blink-like" process. Motivated by this strategy, we first investigate whether MLLMs exhibit similar behavior. Our pilot analysis reveals that MLLMs naturally attend to different visual regions across layers and that selectively allocating more computation to salient tokens can enhance visual perception. Building on this insight, we propose Blink, a dynamic visual token resolution framework that emulates this human-inspired process within a single forward pass. Specifically, Blink includes two modules: saliency-guided scanning and dynamic token resolution. It first estimates the saliency of visual tokens in each layer based on the attention map, and extends important tokens through a plug-and-play token super-resolution (TokenSR) module. In the next layer, it drops the extended tokens once they lose focus. This dynamic mechanism balances broad exploration with fine-grained focus, enhancing visual perception adaptively and efficiently. Extensive experiments demonstrate Blink's effectiveness in enhancing visual perception and multimodal understanding.
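To make the described mechanism concrete, below is a minimal sketch of the per-layer idea as we read it from the abstract: estimate visual-token saliency from the attention map, expand the most salient tokens with a TokenSR-style module, and re-decide the expansion at each layer so tokens that lose focus are dropped. This is not the authors' implementation; the saliency heuristic (attention received, averaged over heads and queries), the `top_k` and `expand_factor` parameters, and the module interfaces are assumptions for illustration.

```python
# Sketch of Blink-style saliency-guided scanning + dynamic token resolution.
# All names and heuristics below are assumptions inferred from the abstract.

import torch
import torch.nn as nn


class TokenSR(nn.Module):
    """Hypothetical plug-and-play token super-resolution: maps one visual
    token to `expand_factor` finer-grained tokens."""

    def __init__(self, dim: int, expand_factor: int = 4):
        super().__init__()
        self.expand_factor = expand_factor
        self.proj = nn.Linear(dim, dim * expand_factor)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (num_selected, dim) -> (num_selected * expand_factor, dim)
        n, d = tokens.shape
        return self.proj(tokens).view(n * self.expand_factor, d)


def saliency_from_attention(attn: torch.Tensor, visual_idx: torch.Tensor) -> torch.Tensor:
    """Assumed saliency estimate: mean attention each visual token receives,
    averaged over heads and query positions.

    attn: (num_heads, seq_len, seq_len) attention map of the current layer.
    visual_idx: indices of visual tokens within the sequence.
    """
    received = attn.mean(dim=0).mean(dim=0)   # (seq_len,) attention received per key position
    return received[visual_idx]               # (num_visual,)


def blink_step(visual_tokens: torch.Tensor, saliency: torch.Tensor,
               token_sr: TokenSR, top_k: int = 8):
    """One layer of the dynamic mechanism: keep the original visual tokens and
    append super-resolved copies of the top-k salient ones. Tokens expanded in
    an earlier layer are assumed to have been dropped before this call, so the
    expansion is re-decided layer by layer."""
    k = min(top_k, visual_tokens.size(0))
    top_idx = saliency.topk(k).indices
    expanded = token_sr(visual_tokens[top_idx])        # finer-grained tokens
    return torch.cat([visual_tokens, expanded], dim=0), top_idx


if __name__ == "__main__":
    dim, num_visual, seq_len, num_heads = 64, 16, 32, 4
    attn = torch.softmax(torch.randn(num_heads, seq_len, seq_len), dim=-1)
    hidden = torch.randn(num_visual, dim)
    visual_idx = torch.arange(num_visual)

    sal = saliency_from_attention(attn, visual_idx)
    tokens_out, focused = blink_step(hidden, sal, TokenSR(dim), top_k=4)
    print(tokens_out.shape, focused.tolist())          # (16 + 4*4, 64) and the focused indices
```

In an actual MLLM, a step like this would run inside each transformer layer on the visual portion of the sequence, which is how the abstract's single-forward-pass "scan then focus" behavior could be realized.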
Similar Papers
MedBLINK: Probing Basic Perception in Multimodal Language Models for Medicine
Artificial Intelligence
Helps doctors trust AI to read medical pictures.
Direct Visual Grounding by Directing Attention of Visual Tokens
CV and Pattern Recognition
Makes AI better at answering questions about pictures.
BLINK-Twice: You see, but do you observe? A Reasoning Benchmark on Visual Perception
CV and Pattern Recognition
Helps computers truly "see" and think about pictures.