Score: 1

AnatomiX: An Anatomy-Aware Grounded Multimodal Large Language Model for Chest X-Ray Interpretation

Published: January 6, 2026 | arXiv ID: 2601.03191v1

By: Anees Ur Rehman Hashmi, Numan Saeed, Christoph Lippert

Potential Business Impact:

Helps doctors read chest X-rays by recognizing and locating the anatomical structures in the image.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Multimodal medical large language models have shown impressive progress in chest X-ray interpretation but continue to face challenges in spatial reasoning and anatomical understanding. Although existing grounding techniques improve overall performance, they often fail to establish true anatomical correspondence, resulting in incorrect anatomical understanding in the medical domain. To address this gap, we introduce AnatomiX, a multitask multimodal large language model explicitly designed for anatomically grounded chest X-ray interpretation. Inspired by the radiological workflow, AnatomiX adopts a two-stage approach: it first identifies anatomical structures and extracts their features, then leverages a large language model to perform diverse downstream tasks such as phrase grounding, report generation, visual question answering, and image understanding. Extensive experiments across multiple benchmarks demonstrate that AnatomiX achieves superior anatomical reasoning, delivering over 25% performance improvement on anatomy grounding, phrase grounding, grounded diagnosis, and grounded captioning tasks compared to existing approaches. Code and the pretrained model are available at https://github.com/aneesurhashmi/anatomix
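For readers who want a concrete picture of the two-stage design described above, here is a minimal PyTorch sketch of one plausible realization: a stage-one encoder that pools one feature vector per anatomical region via learned queries, and a stage-two projector that maps those region tokens into an LLM's embedding space. Every class name, dimension, and module choice below is an illustrative assumption, not the authors' implementation; the actual code lives in the linked repository.

    import torch
    import torch.nn as nn

    class AnatomyEncoder(nn.Module):
        # Stage 1 (hypothetical): pool one feature vector per anatomical
        # region from patch features, using learned region queries.
        def __init__(self, feat_dim=256, num_regions=29):
            super().__init__()
            # Stand-in patch embedder; a real system would use a
            # pretrained detector or vision transformer backbone.
            self.backbone = nn.Conv2d(1, feat_dim, kernel_size=16, stride=16)
            self.region_queries = nn.Parameter(torch.randn(num_regions, feat_dim))
            self.attn = nn.MultiheadAttention(feat_dim, num_heads=8, batch_first=True)

        def forward(self, xray):                      # xray: (B, 1, H, W)
            feats = self.backbone(xray)               # (B, D, H/16, W/16)
            feats = feats.flatten(2).transpose(1, 2)  # (B, N, D) patch tokens
            q = self.region_queries.unsqueeze(0).expand(xray.size(0), -1, -1)
            region_feats, _ = self.attn(q, feats, feats)  # one token per region
            return region_feats                       # (B, num_regions, D)

    class GroundedCXRModel(nn.Module):
        # Stage 2 (hypothetical): project region tokens into the LLM's
        # embedding space and prepend them to the text prompt embeddings.
        def __init__(self, feat_dim=256, llm_embed_dim=4096):
            super().__init__()
            self.encoder = AnatomyEncoder(feat_dim=feat_dim)
            self.projector = nn.Linear(feat_dim, llm_embed_dim)

        def forward(self, xray, text_embeds):  # text_embeds: (B, T, llm_embed_dim)
            region_tokens = self.projector(self.encoder(xray))
            # The concatenated sequence would be fed to a decoder-only LLM
            # for phrase grounding, report generation, VQA, and so on.
            return torch.cat([region_tokens, text_embeds], dim=1)

    # Example usage with random tensors:
    model = GroundedCXRModel()
    seq = model(torch.randn(2, 1, 224, 224), torch.randn(2, 16, 4096))
    print(seq.shape)  # torch.Size([2, 45, 4096]): 29 region tokens + 16 text tokens

The design choice this sketch highlights is the split the abstract describes: anatomical localization happens before, and independently of, the language model, so every downstream answer can be tied back to a named region token rather than to the raw image as a whole.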

Country of Origin
🇦🇪 United Arab Emirates

Repos / Data Links
https://github.com/aneesurhashmi/anatomix

Page Count
14 pages

Category
Computer Science:
Computer Vision and Pattern Recognition