Your other Left! Vision-Language Models Fail to Identify Relative Positions in Medical Images
By: Daniel Wolf, Heiko Hillenhagen, Billurvan Taskin, and more
Potential Business Impact:
Reveals that current AI models cannot reliably tell where body parts are in medical scans.
Clinical decision-making relies heavily on understanding the relative positions of anatomical structures and anomalies. For Vision-Language Models (VLMs) to be applicable in clinical practice, the ability to accurately determine relative positions in medical images is therefore a fundamental prerequisite. Despite its importance, this capability remains largely unexplored. To address this gap, we evaluate state-of-the-art VLMs (GPT-4o, Llama 3.2, Pixtral, and JanusPro) and find that all models fail at this fundamental task. Inspired by successful approaches in computer vision, we investigate whether visual prompts, such as alphanumeric or colored markers placed on anatomical structures, can improve performance. While these markers provide moderate improvements, results remain significantly lower on medical images than on natural images. Our evaluations suggest that, in medical imaging, VLMs rely more on prior anatomical knowledge than on the actual image content when answering relative-position questions, often leading to incorrect conclusions. To facilitate further research in this area, we introduce MIRP, the Medical Imaging Relative Positioning benchmark dataset, designed to systematically evaluate the capability to identify relative positions in medical images.
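To make the evaluation setup concrete, below is a minimal sketch of marker-based visual prompting for a relative-position question. This is not the authors' evaluation code: the file paths, marker coordinates, marker styling, and question wording are illustrative placeholders, and the sketch assumes access to the OpenAI API (with an API key in the environment) as one example of the VLMs studied.

```python
# Minimal sketch: overlay alphanumeric/colored markers on a medical image,
# then ask a VLM a relative-position question about the marked structures.
# Paths, coordinates, and prompt wording are hypothetical placeholders.
import base64
from PIL import Image, ImageDraw
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment


def add_markers(in_path: str, out_path: str, markers: dict) -> None:
    """Draw colored circles with alphanumeric labels at the given (x, y) points."""
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for label, (x, y) in markers.items():
        r = 12
        draw.ellipse((x - r, y - r, x + r, y + r), outline="red", width=3)
        draw.text((x + r + 2, y - r), label, fill="red")
    img.save(out_path)


def ask_relative_position(image_path: str, question: str) -> str:
    """Send the marked image plus a relative-position question to GPT-4o."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content


# Hypothetical usage: in practice, marker coordinates would come from
# ground-truth annotations of the anatomical structures being compared.
add_markers("ct_slice.png", "ct_slice_marked.png",
            {"A": (120, 200), "B": (320, 210)})
print(ask_relative_position(
    "ct_slice_marked.png",
    "Is the structure at marker A to the left or to the right of the "
    "structure at marker B in this image? Answer 'left' or 'right'."))
```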
Similar Papers
Are Large Vision Language Models Truly Grounded in Medical Images? Evidence from Italian Clinical Visual Question Answering
CV and Pattern Recognition
Computers sometimes guess answers without looking.
Vision language models are unreliable at trivial spatial cognition
CV and Pattern Recognition
Computers struggle to tell what's left or right.