BlindSight: Harnessing Sparsity for Efficient VLMs

Published: July 11, 2025 | arXiv ID: 2507.09071v1

By: Tharun Adithya Srikrishnan, Deval Shah, Steven K. Reinhardt

Potential Business Impact:

Speeds up AI models that jointly understand images and text.

Business Areas:
Visual Search, Internet Services

Large vision-language models (VLMs) enable the joint processing of text and images. However, including vision data significantly expands the prompt length; combined with the quadratic complexity of the attention computation, this lengthens the prefill phase. One approach to mitigating this bottleneck is to leverage the inherent sparsity in the attention computation. In our analysis of attention patterns in VLMs, we observe that a substantial portion of layers exhibit minimal cross-image attention, except through per-image attention-sink tokens. These sparse attention patterns fall into distinct categories: sink-only, document mask, and a hybrid document-sink mask. Based on this, we propose BlindSight: a training-free approach to optimize VLM inference using an input template-aware attention sparsity mask. We utilize samples from a dataset to derive a prompt-agnostic sparsity categorization for every attention head. We evaluate the proposed technique on VLMs such as Qwen2-VL, Qwen2.5-VL, and Gemma-3. BlindSight yields a 32%-41% reduction in FLOPs on average, with accuracy within -2% to +2% of the original model on most evaluated multi-image understanding benchmarks.
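As a rough illustration of the three sparsity categories named in the abstract (not the authors' implementation), the sketch below builds boolean attention masks for a single head. The function name, the mode labels, and the per-image sink indexing are all assumptions made for the example; only the general idea of block-diagonal (document) masking plus per-image sink tokens comes from the abstract.

```python
import numpy as np

def build_sparsity_mask(segment_ids, sink_positions, mode):
    """Hypothetical sketch of the mask categories described in the abstract.

    segment_ids:    length-N array assigning each token to an image/text segment
    sink_positions: indices of the per-image attention-sink tokens
    mode:           "sink_only", "document", or "document_sink" (illustrative labels)
    Returns an N x N boolean mask (True = attention allowed), intersected with causality.
    """
    n = len(segment_ids)
    causal = np.tril(np.ones((n, n), dtype=bool))                  # standard causal mask
    same_segment = segment_ids[:, None] == segment_ids[None, :]    # document (block-diagonal) mask
    sink_cols = np.zeros((n, n), dtype=bool)
    sink_cols[:, sink_positions] = True                            # any query may attend to sink tokens

    if mode == "sink_only":
        allowed = sink_cols | np.eye(n, dtype=bool)                # sinks plus each token's own position
    elif mode == "document":
        allowed = same_segment                                     # attention confined within each segment
    elif mode == "document_sink":
        allowed = same_segment | sink_cols                         # hybrid of the two patterns
    else:
        raise ValueError(f"unknown mode: {mode}")
    return allowed & causal


# Example: two image segments (0 and 1) followed by a text segment (2),
# with a sink token at the first position of each image.
segments = np.array([0, 0, 0, 1, 1, 1, 2, 2])
sinks = [0, 3]
print(build_sparsity_mask(segments, sinks, "document_sink").astype(int))
```

In this toy setup, the "document_sink" mode keeps cross-image attention blocked except through the sink columns, which is the behavior the abstract attributes to most layers.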

Page Count
13 pages

Category
Computer Science:
CV and Pattern Recognition