On the Robustness of Medical Vision-Language Models: Are they Truly Generalizable?
By: Raza Imam, Rufael Marew, Mohammad Yaqub
Potential Business Impact:
Makes AI better at reading medical pictures even when they have flaws.
Medical Vision-Language Models (MVLMs) have achieved strong generalization in medical image analysis, yet their performance under noisy, corrupted conditions remains largely untested. Clinical imaging is inherently susceptible to acquisition artifacts and noise; however, existing evaluations predominantly assess clean datasets, overlooking robustness, i.e., a model's ability to perform under real-world distortions. To address this gap, we first introduce MediMeta-C, a corruption benchmark that systematically applies several perturbations across multiple medical imaging datasets. Combined with MedMNIST-C, this establishes a comprehensive robustness evaluation framework for MVLMs. We further propose RobustMedCLIP, a visual-encoder adaptation of a pretrained MVLM that incorporates few-shot tuning to enhance resilience against corruptions. Through extensive experiments, we benchmark 5 major MVLMs across 5 medical imaging modalities, revealing that existing models degrade severely under corruption and struggle with domain-modality tradeoffs. Our findings highlight the necessity of diverse training and robust adaptation strategies, demonstrating that efficient low-rank adaptation, when paired with few-shot tuning, improves robustness while preserving generalization across modalities.
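To make the core idea concrete, below is a minimal sketch of low-rank adaptation (LoRA) applied to a pretrained vision encoder, paired with a few-shot tuning loop. This is not the authors' released implementation: the LoRALinear wrapper, the inject_lora helper, and the names in the usage comments (visual_encoder, few_shot_loader, classifier_head) are illustrative assumptions.

```python
# Sketch: LoRA-adapting a frozen vision encoder, then few-shot tuning.
# All module/variable names here are hypothetical, not from the paper's code.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update:
    y = W x + scale * (B A) x, where A and B are small rank-r factors."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep pretrained weights frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        # B is zero-initialized so the adapted model starts identical to the base
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

def inject_lora(module: nn.Module, rank: int = 4):
    """Recursively replace nn.Linear layers with LoRA-wrapped versions."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            setattr(module, name, LoRALinear(child, rank=rank))
        else:
            inject_lora(child, rank=rank)

# Hypothetical few-shot tuning loop: `visual_encoder` is the vision tower of a
# pretrained MVLM, `few_shot_loader` yields a handful of labeled images per
# class, and `classifier_head` maps image features to class logits.
#
# inject_lora(visual_encoder, rank=4)
# optimizer = torch.optim.AdamW(
#     [p for p in visual_encoder.parameters() if p.requires_grad], lr=1e-4)
# for images, labels in few_shot_loader:
#     logits = classifier_head(visual_encoder(images))
#     loss = nn.functional.cross_entropy(logits, labels)
#     loss.backward()
#     optimizer.step()
#     optimizer.zero_grad()
```

Because only the rank-r factors train while the pretrained weights stay frozen, this kind of adaptation touches a small fraction of parameters, which is consistent with the abstract's claim that robustness can improve without sacrificing generalization across modalities.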
Similar Papers
Evaluating Robustness of Vision-Language Models Under Noisy Conditions
CV and Pattern Recognition
Tests how well AI sees and understands pictures.
Analysing the Robustness of Vision-Language-Models to Common Corruptions
CV and Pattern Recognition
Makes AI understand pictures even when they're messy.
How Far Have Medical Vision-Language Models Come? A Comprehensive Benchmarking Study
CV and Pattern Recognition
Helps computers understand medical pictures better.