AnatomiX: An Anatomy-Aware Grounded Multimodal Large Language Model for Chest X-Ray Interpretation
By: Anees Ur Rehman Hashmi, Numan Saeed, Christoph Lippert
Potential Business Impact:
Helps doctors interpret X-rays by recognizing body parts.
Multimodal medical large language models have shown impressive progress in chest X-ray interpretation but continue to face challenges in spatial reasoning and anatomical understanding. Although existing grounding techniques improve overall performance, they often fail to establish true anatomical correspondence, resulting in incorrect anatomical understanding in the medical domain. To address this gap, we introduce AnatomiX, a multitask multimodal large language model explicitly designed for anatomically grounded chest X-ray interpretation. Inspired by the radiological workflow, AnatomiX adopts a two-stage approach: it first identifies anatomical structures and extracts their features, and then leverages a large language model to perform diverse downstream tasks such as phrase grounding, report generation, visual question answering, and image understanding. Extensive experiments across multiple benchmarks demonstrate that AnatomiX achieves superior anatomical reasoning, delivering over a 25% performance improvement on anatomy grounding, phrase grounding, grounded diagnosis, and grounded captioning tasks compared to existing approaches. Code and pretrained models are available at https://github.com/aneesurhashmi/anatomix
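The two-stage design described in the abstract (anatomy detection with region-feature extraction, followed by a language model conditioned on those regions) can be sketched roughly as below. This is a minimal illustrative sketch with hypothetical class and method names (`AnatomyDetector`, `GroundedLLM`, `interpret`), not the authors' implementation, which is available at the linked repository.

```python
# Hypothetical sketch of an AnatomiX-style two-stage pipeline.
# All names and shapes are illustrative assumptions, not the paper's API.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class AnatomicalRegion:
    name: str                      # e.g. "right lower lobe"
    bbox: Tuple[int, int, int, int]  # (x1, y1, x2, y2) in image coordinates
    features: List[float]          # region-level visual embedding


class AnatomyDetector:
    """Stage 1: localize anatomical structures and extract their features."""

    def detect(self, image) -> List[AnatomicalRegion]:
        # A real detector would run on the image here; we return a fixed
        # placeholder region purely to illustrate the data flow.
        return [AnatomicalRegion("right lower lobe", (120, 300, 260, 420), [0.0] * 256)]


class GroundedLLM:
    """Stage 2: condition a language model on region features for downstream tasks."""

    def generate(self, regions: List[AnatomicalRegion], prompt: str) -> str:
        # A real LLM would fuse region tokens with the text prompt; here we
        # simply echo the grounded context to show how stages connect.
        names = ", ".join(r.name for r in regions)
        return f"[grounded on: {names}] response to: {prompt}"


def interpret(image, prompt: str) -> str:
    regions = AnatomyDetector().detect(image)        # stage 1: anatomy grounding
    return GroundedLLM().generate(regions, prompt)   # stage 2: task-specific generation


if __name__ == "__main__":
    print(interpret(image=None, prompt="Generate a report for this chest X-ray."))
```

In such a setup, the same stage-1 regions can back multiple downstream tasks (report generation, phrase grounding, VQA) by changing only the prompt given to stage 2.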
Similar Papers
Radiology Report Generation with Layer-Wise Anatomical Attention
CV and Pattern Recognition
Helps doctors write X-ray reports faster.
Multi Anatomy X-Ray Foundation Model
CV and Pattern Recognition
AI reads X-rays of any body part.
Knowledge-Augmented Language Models Interpreting Structured Chest X-Ray Findings
CV and Pattern Recognition
Helps doctors understand X-rays better using text.