MedGround: Bridging the Evidence Gap in Medical Vision-Language Models with Verified Grounding Data
By: Mengmeng Zhang, Xiaoping Wu, Hao Luo and more
Vision-Language Models (VLMs) can generate convincing clinical narratives, yet they frequently fail to visually ground their statements. We posit that this limitation arises from the scarcity of high-quality, large-scale clinical referring-localization pairs. To address it, we introduce MedGround, an automated pipeline that transforms existing segmentation resources into high-quality medical referring grounding data. Using expert masks as spatial anchors, MedGround derives precise localization targets, extracts shape and spatial cues, and guides VLMs to synthesize natural, clinically grounded queries that reflect morphology and location. To ensure data rigor, a multi-stage verification system combines strict formatting checks, geometry- and medical-prior rules, and image-based visual judging to filter out ambiguous or visually unsupported samples. The result is MedGround-35K, a new multimodal medical grounding dataset. Extensive experiments demonstrate that VLMs trained with MedGround-35K consistently improve referring grounding performance, disambiguate multiple objects more reliably, and generalize well to unseen grounding settings. This work highlights MedGround as a scalable, data-driven approach to anchoring medical language to verifiable visual evidence. The dataset and code will be released publicly upon acceptance.
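The abstract does not give implementation details, but the mask-to-grounding step it describes can be illustrated with a minimal sketch: a binary expert mask is reduced to a bounding-box target plus coarse shape and spatial cues that a prompt template could hand to a VLM when synthesizing referring queries, followed by a toy geometry check in the spirit of the verification stage. The function names (`mask_to_grounding_target`, `passes_geometry_checks`), cue definitions, and thresholds below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def mask_to_grounding_target(mask: np.ndarray):
    """Derive a bounding box and coarse shape/spatial cues from a 2-D binary mask.

    mask: array of shape (H, W), nonzero inside the annotated structure.
    Returns a dict with a pixel-space box plus simple descriptors suitable
    for filling a query-synthesis prompt.
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # empty mask: nothing to ground

    x0, x1 = int(xs.min()), int(xs.max())
    y0, y1 = int(ys.min()), int(ys.max())
    h, w = mask.shape

    # Coarse shape cues: how much of the box the region fills, and its elongation.
    area = int(np.count_nonzero(mask))
    box_area = (x1 - x0 + 1) * (y1 - y0 + 1)
    fill_ratio = area / box_area
    aspect_ratio = (x1 - x0 + 1) / (y1 - y0 + 1)

    # Coarse spatial cue: which third of the image the centroid falls in.
    cx, cy = xs.mean() / w, ys.mean() / h
    horiz = "left" if cx < 1 / 3 else "right" if cx > 2 / 3 else "central"
    vert = "upper" if cy < 1 / 3 else "lower" if cy > 2 / 3 else "middle"

    return {
        "box_xyxy": [x0, y0, x1, y1],
        "area_px": area,
        "fill_ratio": round(float(fill_ratio), 3),
        "aspect_ratio": round(float(aspect_ratio), 3),
        "position": f"{vert} {horiz}",
    }


def passes_geometry_checks(target, min_area_px=64, min_fill=0.05):
    """Toy stand-in for the geometry-prior rules: drop degenerate or
    speck-like regions that cannot support an unambiguous referring query."""
    if target is None:
        return False
    return target["area_px"] >= min_area_px and target["fill_ratio"] >= min_fill
```

In the described pipeline, cues of this kind would be combined with medical priors and an image-based visual judge before a sample is admitted to the dataset; the thresholds here are placeholders for those stricter checks.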
Similar Papers
Med-GLIP: Advancing Medical Language-Image Pre-training with Large-scale Grounded Dataset
CV and Pattern Recognition
Pre-trains a medical language-image model on a large-scale grounded dataset so that textual descriptions of findings can be localized in medical images such as X-rays.
SATGround: A Spatially-Aware Approach for Visual Grounding in Remote Sensing
CV and Pattern Recognition
Grounds textual queries to objects in remote sensing imagery using a spatially-aware approach.
Are Large Vision Language Models Truly Grounded in Medical Images? Evidence from Italian Clinical Visual Question Answering
CV and Pattern Recognition
Examines whether large vision-language models truly rely on visual evidence, or answer Italian clinical visual questions without consulting the image.