Med-GLIP: Advancing Medical Language-Image Pre-training with Large-scale Grounded Dataset
By: Ziye Deng, Ruihan He, Jiaxiang Liu, and more
Potential Business Impact:
Helps doctors find signs of disease in X-rays using words.
Medical image grounding aims to align natural language phrases with specific regions in medical images, serving as a foundational task for intelligent diagnosis, visual question answering (VQA), and medical report generation (MRG). However, existing research is constrained by limited modality coverage, coarse-grained annotations, and the absence of a unified, generalizable grounding framework. To address these challenges, we construct a large-scale medical grounding dataset, Med-GLIP-5M, comprising over 5.3 million region-level annotations across seven imaging modalities and covering diverse anatomical structures and pathological findings. The dataset supports both segmentation and grounding tasks with hierarchical region labels, ranging from organ-level boundaries to fine-grained lesions. Building on this foundation, we propose Med-GLIP, a modality-aware grounding framework trained on Med-GLIP-5M. Rather than relying on explicitly designed expert modules, Med-GLIP implicitly acquires hierarchical semantic understanding from diverse training data, enabling it to recognize structures at multiple granularities, for example distinguishing lungs from pneumonia lesions. Extensive experiments demonstrate that Med-GLIP consistently outperforms state-of-the-art baselines across multiple grounding benchmarks. Furthermore, integrating its spatial outputs into downstream tasks, including medical VQA and report generation, leads to substantial performance gains. Our dataset will be released soon.
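To make the grounding task and the hierarchical region labels concrete, the sketch below shows what a region-level annotation record and a phrase-to-box query could look like. The abstract does not specify Med-GLIP-5M's schema or Med-GLIP's API, so every name here (RegionAnnotation, ground_phrase, the field names) is a hypothetical illustration, and the lookup function merely stands in for a trained grounding model.

# Hypothetical sketch: dataset schema and model API are assumptions, not the paper's.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class RegionAnnotation:
    """One region-level grounding annotation (illustrative fields only)."""
    image_id: str
    modality: str                      # e.g. "X-ray", "CT", "MRI"
    phrase: str                        # natural-language description of the region
    bbox: Tuple[int, int, int, int]    # (x_min, y_min, x_max, y_max) in pixels
    hierarchy_level: str               # "organ" or "lesion"


def ground_phrase(annotations: List[RegionAnnotation],
                  phrase: str) -> List[Tuple[int, int, int, int]]:
    """Toy stand-in for a grounding model: return the boxes whose annotation
    phrase matches the query (a real model would predict them from the image)."""
    return [a.bbox for a in annotations if phrase.lower() in a.phrase.lower()]


if __name__ == "__main__":
    dataset = [
        RegionAnnotation("cxr_0001", "X-ray", "left lung",
                         (30, 40, 250, 480), "organ"),
        RegionAnnotation("cxr_0001", "X-ray", "pneumonia lesion in left lower lobe",
                         (120, 300, 230, 430), "lesion"),
    ]
    # Multi-granularity queries: organ-level vs. lesion-level regions.
    print(ground_phrase(dataset, "left lung"))   # organ boundary
    print(ground_phrase(dataset, "pneumonia"))   # fine-grained lesion

The point of the two queries is the multi-granularity behavior the abstract describes: the same image carries both an organ-level box and a lesion-level box, and the phrase determines which one is returned.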
Similar Papers
MedGround: Bridging the Evidence Gap in Medical Vision-Language Models with Verified Grounding Data
CV and Pattern Recognition
Helps doctors understand medical images better.
Boosting Medical Visual Understanding From Multi-Granular Language Learning
CV and Pattern Recognition
Helps doctors understand many medical images better.
GeM-VG: Towards Generalized Multi-image Visual Grounding with Multimodal Large Language Models
CV and Pattern Recognition
Helps computers understand many pictures at once.