How Far Have Medical Vision-Language Models Come? A Comprehensive Benchmarking Study
By: Che Liu, Jiazhen Pan, Weixiang Shen and more
Potential Business Impact:
Helps computers understand medical pictures better.
Vision-Language Models (VLMs) trained on web-scale corpora excel at natural image tasks and are increasingly repurposed for healthcare; however, their competence on medical tasks remains underexplored. We present a comprehensive evaluation of open-source general-purpose and medically specialised VLMs, ranging from 3B to 72B parameters, across eight benchmarks: MedXpert, OmniMedVQA, PMC-VQA, PathVQA, MMMU, SLAKE, and VQA-RAD. To examine performance across different capabilities, we first separate each benchmark into understanding and reasoning components. Three salient findings emerge. First, large general-purpose models already match or surpass medical-specific counterparts on several benchmarks, demonstrating strong zero-shot transfer from natural to medical images. Second, reasoning performance is consistently lower than understanding, highlighting a critical barrier to safe decision support. Third, performance varies widely across benchmarks, reflecting differences in task design, annotation quality, and knowledge demands. No model yet reaches the reliability threshold for clinical deployment, underscoring the need for stronger multimodal alignment and more rigorous, fine-grained evaluation protocols.
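To illustrate the kind of split scoring the abstract describes, here is a minimal sketch of per-benchmark accuracy aggregation separated into understanding and reasoning subsets. It assumes each evaluation item is pre-tagged with a "skill" label and scored by exact match; the field names and the toy data are illustrative assumptions, not the authors' released evaluation code.

```python
from collections import defaultdict

def score_by_skill(items):
    """Aggregate exact-match accuracy per (benchmark, skill) pair.

    Each item is a dict with keys:
      'benchmark' - e.g. 'SLAKE', 'VQA-RAD'
      'skill'     - 'understanding' or 'reasoning' (assumed pre-tagged)
      'pred'      - model answer string
      'gold'      - reference answer string
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for it in items:
        key = (it["benchmark"], it["skill"])
        totals[key] += 1
        hits[key] += int(it["pred"].strip().lower() == it["gold"].strip().lower())
    return {key: hits[key] / totals[key] for key in totals}

# Toy usage: two hypothetical SLAKE items, one understanding, one reasoning.
example = [
    {"benchmark": "SLAKE", "skill": "understanding",
     "pred": "lung", "gold": "Lung"},
    {"benchmark": "SLAKE", "skill": "reasoning",
     "pred": "pneumonia", "gold": "pleural effusion"},
]
print(score_by_skill(example))
# {('SLAKE', 'understanding'): 1.0, ('SLAKE', 'reasoning'): 0.0}
```

Reporting the two subsets separately, rather than a single pooled accuracy, is what surfaces the gap the paper highlights between understanding and reasoning performance.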
Similar Papers
Vision Language Models in Medicine
CV and Pattern Recognition
Helps doctors understand medical images and notes.
Systematic Evaluation of Large Vision-Language Models for Surgical Artificial Intelligence
CV and Pattern Recognition
AI helps doctors understand surgery better.
DrVD-Bench: Do Vision-Language Models Reason Like Human Doctors in Medical Image Diagnosis?
CV and Pattern Recognition
Tests if AI doctors truly understand medical images.