How Far Have Medical Vision-Language Models Come? A Comprehensive Benchmarking Study

Published: July 15, 2025 | arXiv ID: 2507.11200v2

By: Che Liu, Jiazhen Pan, Weixiang Shen, and more

Potential Business Impact:

Shows how well current AI models can interpret and reason about medical images, and where they still fall short of clinical reliability.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Vision-Language Models (VLMs) trained on web-scale corpora excel at natural image tasks and are increasingly repurposed for healthcare; however, their competence on medical tasks remains underexplored. We present a comprehensive evaluation of open-source general-purpose and medically specialised VLMs, ranging from 3B to 72B parameters, across eight benchmarks, including MedXpert, OmniMedVQA, PMC-VQA, PathVQA, MMMU, SLAKE, and VQA-RAD. To probe different facets of model capability, we separate performance into understanding and reasoning components. Three salient findings emerge. First, large general-purpose models already match or surpass medical-specific counterparts on several benchmarks, demonstrating strong zero-shot transfer from natural to medical images. Second, reasoning performance is consistently lower than understanding, highlighting a critical barrier to safe decision support. Third, performance varies widely across benchmarks, reflecting differences in task design, annotation quality, and knowledge demands. No model yet reaches the reliability threshold for clinical deployment, underscoring the need for stronger multimodal alignment and more rigorous, fine-grained evaluation protocols.
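To illustrate the kind of aspect-wise scoring the abstract describes (splitting accuracy into understanding versus reasoning, per benchmark), here is a minimal Python sketch. The `records` data, the `aspect` labels, and the `accuracy_by` helper are hypothetical and purely illustrative; they are not the authors' evaluation code or data.

```python
from collections import defaultdict

# Hypothetical evaluation records: one entry per benchmark question, noting
# the benchmark name, whether the item probes understanding or reasoning,
# and whether the model's answer matched the reference. Values are made up.
records = [
    {"benchmark": "VQA-RAD", "aspect": "understanding", "correct": True},
    {"benchmark": "VQA-RAD", "aspect": "reasoning",     "correct": False},
    {"benchmark": "SLAKE",   "aspect": "understanding", "correct": True},
    {"benchmark": "SLAKE",   "aspect": "reasoning",     "correct": True},
    {"benchmark": "PMC-VQA", "aspect": "understanding", "correct": False},
    {"benchmark": "PMC-VQA", "aspect": "reasoning",     "correct": False},
]

def accuracy_by(records, key):
    """Group records by `key` and return per-group accuracy."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[key]] += 1
        hits[r[key]] += int(r["correct"])
    return {k: hits[k] / totals[k] for k in totals}

if __name__ == "__main__":
    # Aggregate accuracy by capability aspect and by benchmark, mirroring
    # the paper's split between understanding and reasoning performance.
    print("By aspect:   ", accuracy_by(records, "aspect"))
    print("By benchmark:", accuracy_by(records, "benchmark"))
```

Under this framing, a gap between the "understanding" and "reasoning" accuracies is exactly the pattern the paper reports: reasoning scores trail understanding scores across models.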

Country of Origin
🇬🇧 United Kingdom

Page Count
5 pages

Category
Computer Science:
CV and Pattern Recognition