Holistic Artificial Intelligence in Medicine: improved performance and explainability
By: Periklis Petridis, Georgios Margaritis, Vasiliki Stoumpou, and more
Potential Business Impact:
AI helps doctors understand patient health better.
With the increasing interest in deploying Artificial Intelligence in medicine, we previously introduced HAIM (Holistic AI in Medicine), a framework that fuses multimodal data to solve downstream clinical tasks. However, HAIM uses data in a task-agnostic manner and lacks explainability. To address these limitations, we introduce xHAIM (Explainable HAIM), a novel framework leveraging Generative AI to enhance both prediction and explainability through four structured steps: (1) automatically identifying task-relevant patient data across modalities, (2) generating comprehensive patient summaries, (3) using these summaries for improved predictive modeling, and (4) providing clinical explanations by linking predictions to patient-specific medical knowledge. Evaluated on the HAIM-MIMIC-MM dataset, xHAIM improves average AUC from 79.9% to 90.3% across chest pathology and operative tasks. Importantly, xHAIM transforms AI from a black-box predictor into an explainable decision support system, enabling clinicians to interactively trace predictions back to relevant patient data, bridging AI advancements with clinical utility.
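The four xHAIM steps can be sketched as a simple pipeline. This is a minimal illustrative sketch, not the authors' implementation: the function names are hypothetical, and a keyword filter and a toy scorer stand in for the paper's Generative-AI components and predictive model.

```python
# Hypothetical sketch of the four xHAIM steps. The keyword-based relevance
# filter and the term-counting "predictor" are illustrative stand-ins for
# the LLM and the trained model described in the paper.

def select_relevant(records, task_keywords):
    """Step 1: keep only patient records that mention the clinical task."""
    return [r for r in records
            if any(k in r["text"].lower() for k in task_keywords)]

def summarize(records):
    """Step 2: build a compact patient summary (an LLM would do this)."""
    return " ".join(r["text"] for r in records)

def predict(summary, risk_terms):
    """Step 3: score the summary (stand-in for the real predictive model)."""
    hits = [t for t in risk_terms if t in summary.lower()]
    return len(hits) / len(risk_terms), hits

def explain(score, evidence):
    """Step 4: link the prediction back to the supporting patient data."""
    return {"risk_score": round(score, 2), "supporting_evidence": evidence}

# Toy multimodal-style input: free-text notes for one patient.
records = [
    {"modality": "note", "text": "CXR shows right lower lobe opacity."},
    {"modality": "note", "text": "Patient reports knee pain after fall."},
    {"modality": "note", "text": "Fever and productive cough for 3 days."},
]

relevant = select_relevant(records, ["cxr", "cough", "opacity", "fever"])
summary = summarize(relevant)
score, evidence = predict(summary, ["opacity", "fever", "cough"])
report = explain(score, evidence)
print(report)
```

The key design point the sketch mirrors is step 4: the output ties the score to the specific patient evidence that produced it, which is what lets a clinician trace a prediction back to the underlying data rather than accepting a black-box number.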
Similar Papers
Towards a Transparent and Interpretable AI Model for Medical Image Classifications
CV and Pattern Recognition
Makes AI doctors explain their choices clearly.
Holistic Explainable AI (H-XAI): Extending Transparency Beyond Developers in AI-Driven Decision Making
Artificial Intelligence
Helps everyone understand how computer decisions are made.
Explainable Artificial Intelligence in Biomedical Image Analysis: A Comprehensive Survey
CV and Pattern Recognition
Helps doctors understand medical pictures better.