Enhancing Medical Large Vision-Language Models via Alignment Distillation
By: Aofei Chang, Ting Wang, Fenglong Ma
Medical Large Vision-Language Models (Med-LVLMs) have shown promising results in clinical applications, but often suffer from hallucinated outputs due to misaligned visual understanding. In this work, we identify two fundamental limitations contributing to this issue: insufficient visual representation learning and poor visual attention alignment. To address these problems, we propose MEDALIGN, a simple, lightweight alignment distillation framework that transfers visual alignment knowledge from a domain-specific Contrastive Language-Image Pre-training (CLIP) model to Med-LVLMs. MEDALIGN introduces two distillation losses: a spatial-aware visual alignment loss based on visual token-level similarity structures, and an attention-aware distillation loss that guides attention toward diagnostically relevant regions. Extensive experiments on medical report generation and medical visual question answering (VQA) benchmarks show that MEDALIGN consistently improves both performance and interpretability, yielding more visually grounded outputs.
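To make the two objectives concrete, below is a minimal PyTorch sketch of what such distillation losses could look like. The abstract does not give the exact formulation, so everything here is an illustrative assumption: the function names (`spatial_alignment_loss`, `attention_distillation_loss`), the tensor shapes, the choice of MSE over cosine-similarity Gram matrices for the spatial term, and KL divergence over normalized attention maps for the attention term are all placeholders, not the paper's method.

```python
import torch
import torch.nn.functional as F

def spatial_alignment_loss(student_tokens: torch.Tensor,
                           teacher_tokens: torch.Tensor) -> torch.Tensor:
    """Match token-level similarity structures between student and teacher.

    student_tokens: (B, N, D_s) visual tokens from the Med-LVLM's vision pathway.
    teacher_tokens: (B, N, D_t) visual tokens from a medical CLIP image encoder.
    Comparing the N x N similarity matrices (rather than raw features)
    avoids requiring the embedding dimensions D_s and D_t to match.
    """
    s = F.normalize(student_tokens, dim=-1)
    t = F.normalize(teacher_tokens, dim=-1)
    sim_student = s @ s.transpose(1, 2)  # (B, N, N) student similarity structure
    sim_teacher = t @ t.transpose(1, 2)  # (B, N, N) teacher similarity structure
    return F.mse_loss(sim_student, sim_teacher)

def attention_distillation_loss(student_attn: torch.Tensor,
                                teacher_attn: torch.Tensor,
                                eps: float = 1e-8) -> torch.Tensor:
    """Pull the student's attention over visual tokens toward the teacher's.

    student_attn, teacher_attn: (B, N) non-negative attention weights over
    the N visual tokens (e.g. the LVLM's text-to-image attention vs. the
    CLIP encoder's [CLS]-to-patch attention).
    """
    p = teacher_attn / (teacher_attn.sum(dim=-1, keepdim=True) + eps)
    q = student_attn / (student_attn.sum(dim=-1, keepdim=True) + eps)
    # KL(teacher || student); F.kl_div expects log-probabilities as input.
    return F.kl_div((q + eps).log(), p, reduction="batchmean")
```

In a training loop, these terms would presumably be added to the usual autoregressive task loss with weighting hyperparameters (here assumed as lambda_sa and lambda_ad), e.g. `total = task_loss + lambda_sa * spatial_alignment_loss(s_tok, t_tok) + lambda_ad * attention_distillation_loss(s_attn, t_attn)`.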
Similar Papers
Boosting Medical Vision-Language Pretraining via Momentum Self-Distillation under Limited Computing Resources
Computer Vision and Pattern Recognition
Helps computers understand medical images better, faster.
MedAlign: A Synergistic Framework of Multimodal Preference Optimization and Federated Meta-Cognitive Reasoning
Artificial Intelligence
Helps doctors understand medical images better.
Data-Efficient Fine-Tuning of Vision-Language Models for Diagnosis of Alzheimer's Disease
Computer Vision and Pattern Recognition
Helps doctors find Alzheimer's using brain scans.