Explainable AI in Healthcare: to Explain, to Predict, or to Describe?
By: Alex Carriero, Anne de Hond, Bram Cappers, and more
Potential Business Impact:
AI can show how it works, but not why.
Explainable Artificial Intelligence (AI) methods are designed to provide information about how AI-based models make predictions. In healthcare, there is a widespread expectation that these methods will provide relevant and accurate information about a model's inner workings to different stakeholders, ranging from patients and healthcare providers to AI and medical guideline developers. This is a challenging endeavour, since what qualifies as relevant information may differ greatly depending on the stakeholder. For many stakeholders, relevant explanations are causal in nature, yet explainable AI methods are often unable to deliver this information. Using the Describe-Predict-Explain framework, we argue that explainable AI methods are good descriptive tools: they may help to describe how a model works, but they are limited in their ability to explain why a model works in terms of true underlying biological mechanisms and cause-and-effect relations. This limits the suitability of explainable AI methods for providing actionable advice to patients or for judging the face validity of AI-based models.
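To make the describe-versus-explain distinction concrete, here is a minimal Python sketch (not from the paper; the scikit-learn tooling, the synthetic "proxy biomarker" scenario, and all variable names are our assumptions for illustration). It shows permutation feature importance faithfully describing which input a trained model relies on, even when that input is a non-causal proxy of a hidden disease process, so the "explanation" offers no basis for intervening on the biomarker.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Latent disease severity: the true (unobserved) cause of the outcome.
severity = rng.normal(size=n)
# A proxy biomarker: an *effect* of severity, not a cause of the outcome.
proxy_biomarker = severity + rng.normal(scale=0.3, size=n)
# Age: a weak, genuinely causal risk factor the model also sees.
age = rng.normal(size=n)
# Outcome driven by severity and (weakly) age; the model never sees severity.
y = (severity + 0.3 * age + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.column_stack([proxy_biomarker, age])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)

for name, imp in zip(["proxy_biomarker", "age"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
# Expected: the proxy biomarker dominates. That is an accurate *description*
# of what the model relies on, but it is not a causal *explanation*:
# intervening on the biomarker would not change the patient's outcome.
```

In this sketch the importance scores are a correct account of the model's behaviour, which is exactly the descriptive role the authors grant explainable AI; they say nothing about the underlying biological mechanism, which is the explanatory role the authors argue these methods cannot fill.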
Similar Papers
Onto-Epistemological Analysis of AI Explanations
Artificial Intelligence
Makes AI decisions understandable and trustworthy.
Beware of "Explanations" of AI
Machine Learning (CS)
Makes AI explanations safer and more trustworthy.
From Explainable to Explanatory Artificial Intelligence: Toward a New Paradigm for Human-Centered Explanations through Generative AI
Artificial Intelligence
AI explains decisions like a helpful friend.