Enhancing Abnormality Grounding for Vision Language Models with Knowledge Descriptions

Published: March 5, 2025 | arXiv ID: 2503.03278v1

By: Jun Li, Che Liu, Wenjia Bai, and more

Potential Business Impact:

Helps clinicians detect and localize abnormalities in medical images such as X-rays.

Business Areas:
Image Recognition, Data and Analytics, Software

Visual Language Models (VLMs) have demonstrated impressive capabilities in visual grounding tasks. However, their effectiveness in the medical domain, particularly for abnormality detection and localization within medical images, remains underexplored. A major challenge is the complex and abstract nature of medical terminology, which makes it difficult to directly associate pathological anomaly terms with their corresponding visual features. In this work, we introduce a novel approach to enhance VLM performance in medical abnormality detection and localization by leveraging decomposed medical knowledge. Instead of directly prompting models to recognize specific abnormalities, we focus on breaking down medical concepts into fundamental attributes and common visual patterns. This strategy promotes a stronger alignment between textual descriptions and visual features, improving both the recognition and localization of abnormalities in medical images. We evaluate our method on the 0.23B Florence-2 base model and demonstrate that it achieves comparable performance in abnormality grounding to significantly larger 7B LLaVA-based medical VLMs, despite being trained on only 1.5% of the data used for such models. Experimental results also demonstrate the effectiveness of our approach in both known and previously unseen abnormalities, suggesting its strong generalization capabilities.
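The core idea in the abstract — replacing a bare abnormality term with a prompt built from its fundamental attributes and common visual patterns — can be sketched as a simple prompt-construction helper. This is a minimal illustration, not the authors' implementation: the knowledge table, attribute names, and descriptions below are hypothetical placeholders.

```python
# Hypothetical sketch of knowledge-decomposed prompting: instead of asking a
# VLM to ground the raw term "pneumothorax", build the prompt from decomposed
# visual attributes. All entries here are illustrative, not from the paper.
ABNORMALITY_KNOWLEDGE = {
    "pneumothorax": {
        "appearance": "a dark, air-filled region without lung markings",
        "location": "along the pleural edge of the lung",
        "shape": "a thin crescent between the lung and the chest wall",
    },
}

def decomposed_prompt(term: str) -> str:
    """Turn an abnormality term into an attribute-based grounding prompt."""
    attrs = ABNORMALITY_KNOWLEDGE[term]
    parts = [f"{name}: {desc}" for name, desc in attrs.items()]
    return f"Locate {term}, which shows " + "; ".join(parts) + "."

print(decomposed_prompt("pneumothorax"))
```

The resulting text prompt would then be paired with the image and fed to the grounding model (e.g. Florence-2), so the model aligns region features with concrete visual descriptions rather than an abstract medical term.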

Country of Origin
🇩🇪 Germany

Page Count
11 pages

Category
Computer Science:
CV and Pattern Recognition