Score: 1

XBench: A Comprehensive Benchmark for Visual-Language Explanations in Chest Radiography

Published: October 22, 2025 | arXiv ID: 2510.19599v1

By: Haozhe Luo, Shelley Zixin Shu, Ziyu Zhou, and more

Potential Business Impact:

Helps doctors trust AI predictions on medical images by checking whether the model's explanations point to the right image regions.

Business Areas:
Image Recognition, Data and Analytics, Software

Vision-language models (VLMs) have recently shown remarkable zero-shot performance in medical image understanding, yet their grounding ability, the extent to which textual concepts align with visual evidence, remains underexplored. In the medical domain, however, reliable grounding is essential for interpretability and clinical adoption. In this work, we present the first systematic benchmark for evaluating cross-modal interpretability in chest X-rays across seven CLIP-style VLM variants. We generate visual explanations using cross-attention and similarity-based localization maps, and quantitatively assess their alignment with radiologist-annotated regions across multiple pathologies. Our analysis reveals that: (1) while all VLM variants demonstrate reasonable localization for large and well-defined pathologies, their performance degrades substantially for small or diffuse lesions; (2) models pretrained on chest X-ray-specific datasets exhibit better alignment than those trained on general-domain data; and (3) a model's overall recognition ability and grounding ability are strongly correlated. These findings underscore that current VLMs, despite their strong recognition ability, still fall short of clinically reliable grounding, highlighting the need for targeted interpretability benchmarks before deployment in medical practice. XBench code is available at https://github.com/Roypic/Benchmarkingattention.
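
To make the evaluation concrete, here is a minimal, hypothetical sketch (not the released XBench code or the paper's exact protocol) of how a similarity-based localization map can be derived from CLIP-style patch embeddings and scored against a radiologist-annotated mask. The ViT-B/16-style 14x14 patch grid, the example text prompt, and the IoU-at-threshold metric are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: similarity-based localization from CLIP-style patch
# embeddings, scored against an annotated region via IoU. Names and shapes
# are assumptions for illustration, not the XBench API.
import torch
import torch.nn.functional as F


def similarity_map(patch_embeds: torch.Tensor,
                   text_embed: torch.Tensor,
                   grid_size: int,
                   image_size: int) -> torch.Tensor:
    """Cosine similarity between each image patch and a text concept,
    reshaped to the patch grid and upsampled to image resolution."""
    patch_embeds = F.normalize(patch_embeds, dim=-1)   # (P, D)
    text_embed = F.normalize(text_embed, dim=-1)        # (D,)
    sims = patch_embeds @ text_embed                     # (P,)
    heat = sims.view(1, 1, grid_size, grid_size)         # patch grid
    heat = F.interpolate(heat, size=(image_size, image_size),
                         mode="bilinear", align_corners=False)
    return heat.squeeze()                                 # (H, W)


def iou_at_threshold(heatmap: torch.Tensor,
                     gt_mask: torch.Tensor,
                     threshold: float = 0.5) -> float:
    """Binarize the min-max normalized heatmap and compute IoU with the
    radiologist-annotated region."""
    heat = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
    pred = heat >= threshold
    gt = gt_mask.bool()
    inter = (pred & gt).sum().item()
    union = (pred | gt).sum().item()
    return inter / union if union else 0.0


if __name__ == "__main__":
    # Synthetic stand-ins: a 14x14 patch grid with 512-d embeddings.
    torch.manual_seed(0)
    patch_embeds = torch.randn(14 * 14, 512)
    text_embed = torch.randn(512)      # e.g. embedding of "right lower lobe opacity"
    gt_mask = torch.zeros(224, 224)
    gt_mask[120:200, 30:120] = 1       # toy annotated lesion region

    heat = similarity_map(patch_embeds, text_embed, grid_size=14, image_size=224)
    print(f"IoU @ 0.5: {iou_at_threshold(heat, gt_mask):.3f}")
```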

Repos / Data Links
https://github.com/Roypic/Benchmarkingattention

Page Count
5 pages

Category
Computer Science:
CV and Pattern Recognition