BlindSight: Harnessing Sparsity for Efficient VLMs
By: Tharun Adithya Srikrishnan, Deval Shah, Steven K. Reinhardt
Potential Business Impact:
Makes AI understand pictures and words faster.
Large vision-language models (VLMs) enable the joint processing of text and images. However, the inclusion of vision data significantly expands the prompt length, which, combined with the quadratic complexity of the attention computation, results in a longer prefill duration. One approach to mitigating this bottleneck is to leverage the inherent sparsity in the attention computation. In our analysis of attention patterns in VLMs, we observe that a substantial portion of layers exhibit minimal cross-image attention, except through attention-sink tokens per image. These sparse attention patterns fall into distinct categories: sink-only, document mask, and a hybrid document-sink mask. Based on this, we propose BlindSight: a training-free approach to optimize VLM inference using an input-template-aware attention sparsity mask. We utilize samples from a dataset to derive a prompt-agnostic sparsity categorization for every attention head. We evaluate the proposed technique on VLMs such as Qwen2-VL, Qwen2.5-VL, and Gemma-3. BlindSight yields a 32%-41% reduction in FLOPs on average, with accuracy within -2% to +2% of the original model on most evaluated multi-image understanding benchmarks.
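The three mask categories named in the abstract can be pictured as boolean attention masks over a multi-image prompt. The sketch below is an illustrative reconstruction, not the authors' implementation: the function name, the choice of the first token of each image span as its sink, and the assumption that text tokens keep dense causal attention are all hypothetical.

```python
# Minimal sketch of sink-only, document, and hybrid document-sink masks
# for a prompt containing several image token spans. Illustrative only.
import torch

def build_sparsity_mask(seq_len, image_spans, category):
    """Boolean attention mask (True = attend) for one attention head.

    seq_len     -- total prompt length in tokens
    image_spans -- list of (start, end) token ranges, one per image
    category    -- "sink-only", "document", or "document-sink"
    """
    causal = torch.ones(seq_len, seq_len).tril().bool()

    # Assumption: the first token of each image span acts as its attention sink.
    sinks = torch.zeros(seq_len, dtype=torch.bool)
    for start, _ in image_spans:
        sinks[start] = True

    # Block-diagonal "document" component: tokens of the same image see each other.
    same_image = torch.zeros(seq_len, seq_len, dtype=torch.bool)
    for start, end in image_spans:
        same_image[start:end, start:end] = True

    # Assumption: tokens outside any image span (text) keep dense causal attention.
    in_image = torch.zeros(seq_len, dtype=torch.bool)
    for start, end in image_spans:
        in_image[start:end] = True
    is_text = ~in_image

    if category == "sink-only":
        allowed = sinks.expand(seq_len, seq_len).clone()   # only sink columns visible
    elif category == "document":
        allowed = same_image.clone()                       # only same-image attention
    elif category == "document-sink":
        allowed = same_image | sinks                       # same image plus all sinks
    else:
        raise ValueError(f"unknown category: {category}")

    allowed[is_text, :] = True                    # text queries attend densely
    allowed[:, is_text] = True                    # text keys stay visible to all queries
    allowed |= torch.eye(seq_len, dtype=torch.bool)  # always allow self-attention
    return causal & allowed

# Example: a 20-token prompt containing two 6-token image spans.
mask = build_sparsity_mask(20, [(2, 8), (10, 16)], "document-sink")
```

Under this sketch, a head categorized as "document-sink" skips most cross-image key/value pairs, which is where the reported FLOP savings would come from; the per-head category would be chosen offline from sample prompts, as the abstract describes.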
Similar Papers
Learning Free Token Reduction for Multi-Modal Large Language Models
CV and Pattern Recognition
Makes AI understand videos faster and cheaper.
InfiniteVL: Synergizing Linear and Sparse Attention for Highly-Efficient, Unlimited-Input Vision-Language Models
CV and Pattern Recognition
Lets AI remember long videos and stories.
Eye Gaze Tells You Where to Compute: Gaze-Driven Efficient VLMs
CV and Pattern Recognition
Makes smart glasses understand things faster.