Explainable AI in Healthcare: to Explain, to Predict, or to Describe?

Published: August 7, 2025 | arXiv ID: 2508.05753v1

By: Alex Carriero, Anne de Hond, Bram Cappers, and more

Potential Business Impact:

AI can show how it works, but not why.

Explainable Artificial Intelligence (AI) methods are designed to provide information about how AI-based models make predictions. In healthcare, there is a widespread expectation that these methods will provide relevant and accurate information about a model's inner workings to different stakeholders, ranging from patients and healthcare providers to AI and medical guideline developers. This is a challenging endeavour, since what qualifies as relevant information may differ greatly depending on the stakeholder. For many stakeholders, relevant explanations are causal in nature, yet explainable AI methods are often unable to deliver this information. Using the Describe-Predict-Explain framework, we argue that explainable AI methods are good descriptive tools: they may help to describe how a model works, but they are limited in their ability to explain why a model works in terms of true underlying biological mechanisms and cause-and-effect relations. This limits the suitability of explainable AI methods for providing actionable advice to patients or judging the face validity of AI-based models.
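The paper's distinction between describing and explaining can be made concrete with a common explainable AI technique, permutation importance (not a method from the paper itself; this is an illustrative sketch with synthetic data). Below, the outcome is truly caused by feature `x1`, but the model happens to rely on a near-duplicate proxy `x2`. The importance scores correctly describe what the model uses, while saying nothing about the underlying cause:

```python
import random

random.seed(0)

# Synthetic data: the true cause of the outcome is x1; x2 is merely
# a near-duplicate proxy of x1 with no causal effect of its own.
n = 1000
data = []
for _ in range(n):
    x1 = random.gauss(0, 1)
    x2 = x1 + random.gauss(0, 0.1)   # proxy, highly correlated with x1
    y = 1 if x1 > 0 else 0           # outcome caused by x1 alone
    data.append((x1, x2, y))

# A fitted "model" that happens to rely on the proxy x2, not the cause x1.
def model(x1, x2):
    return 1 if x2 > 0 else 0

def accuracy(rows):
    return sum(model(a, b) == y for a, b, y in rows) / len(rows)

def permutation_importance(rows, feature_idx):
    """Drop in accuracy when one feature column is shuffled."""
    base = accuracy(rows)
    col = [r[feature_idx] for r in rows]
    random.shuffle(col)
    permuted = [
        (col[i], r[1], r[2]) if feature_idx == 0 else (r[0], col[i], r[2])
        for i, r in enumerate(rows)
    ]
    return base - accuracy(permuted)

imp_x1 = permutation_importance(data, 0)
imp_x2 = permutation_importance(data, 1)

# The method accurately *describes* the model: it relies on x2, not x1.
# It does not *explain* the outcome: the true cause, x1, scores near zero.
print(f"importance of x1 (true cause): {imp_x1:.3f}")
print(f"importance of x2 (proxy):      {imp_x2:.3f}")
```

Shuffling `x1` leaves accuracy unchanged (the model ignores it), while shuffling `x2` collapses accuracy toward chance. A stakeholder reading these scores as causal would wrongly conclude that `x2` drives the outcome, which is exactly the describe-versus-explain gap the paper highlights.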

Page Count
10 pages

Category
Statistics: Methodology