Investigating Spatial Attention Bias in Vision-Language Models
By: Aryan Chaudhary, Sanchit Goyal, Pratik Narang, and more
Vision-Language Models (VLMs) have demonstrated remarkable capabilities in understanding visual content, yet systematic biases in their spatial processing remain largely unexplored. This work identifies and characterizes a spatial attention bias in which VLMs consistently describe left-positioned content before right-positioned content in horizontally concatenated images. Through controlled experiments on image pairs with both open-source and closed-source models, we show that the bias persists across architectures: under neutral prompting, models describe the left-positioned content first in approximately 97% of cases. Testing an Arabic-finetuned model shows that the bias persists despite right-to-left language training, ruling out language reading direction as the primary cause. Examination of the annotation guidelines for the PixMo and Visual Genome training datasets reveals no explicit left-first ordering instructions, suggesting architectural factors, rather than explicit training-data instructions, as the likely source of the bias. These findings reveal fundamental limitations in how current VLMs process spatial information.
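For concreteness, below is a minimal sketch of the probe the abstract describes: horizontally concatenate an image pair, query a model with a neutral prompt, and record which side is described first. The file paths, object labels, and the neutral prompt wording are illustrative assumptions, not the paper's actual harness; the model-query step itself is omitted.

from PIL import Image

def concat_horizontal(left_path: str, right_path: str, out_path: str) -> str:
    """Paste two images side by side, left image first, on a white canvas."""
    left = Image.open(left_path).convert("RGB")
    right = Image.open(right_path).convert("RGB")
    canvas = Image.new("RGB", (left.width + right.width, max(left.height, right.height)), "white")
    canvas.paste(left, (0, 0))
    canvas.paste(right, (left.width, 0))
    canvas.save(out_path)
    return out_path

def first_mentioned(description: str, left_label: str, right_label: str) -> str:
    """Return 'left', 'right', or 'neither' depending on which object label
    appears first in a model-generated description."""
    text = description.lower()
    li, ri = text.find(left_label.lower()), text.find(right_label.lower())
    if li == -1 and ri == -1:
        return "neither"
    if ri == -1 or (li != -1 and li < ri):
        return "left"
    return "right"

# Tally over a batch of (description, left_label, right_label) records, where
# each description comes from a VLM prompted neutrally, e.g. "Describe this
# image." (prompt wording is an assumption):
# outcomes = [first_mentioned(d, l, r) for d, l, r in records]
# left_first_rate = outcomes.count("left") / max(1, len(outcomes))

A left_first_rate near 0.5 would indicate no positional preference; the reported ~97% corresponds to a rate near 0.97 under this kind of tally.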
Similar Papers
Beyond Semantics: Rediscovering Spatial Awareness in Vision-Language Models
CV and Pattern Recognition
Teaches computers to understand object positions better.
Examining Vision Language Models through Multi-dimensional Experiments with Vision and Text Features
CV and Pattern Recognition
Fixes AI mistakes when looking at pictures.
Vision language models are unreliable at trivial spatial cognition
CV and Pattern Recognition
Computers struggle to tell what's left or right.