Beyond Technocratic XAI: The Who, What & How in Explanation Design
By: Ruchira Dhar, Stephanie Brandl, Ninell Oldenburg, et al.
Potential Business Impact:
Helps organizations design AI explanations that fit the audience, the question being answered, and the delivery format.
The field of Explainable AI (XAI) offers a wide range of techniques for making complex models interpretable. Yet, in practice, generating meaningful explanations is a context-dependent task that requires intentional design choices to ensure accessibility and transparency. This paper reframes explanation as a situated design process -- an approach particularly relevant for practitioners involved in building and deploying explainable systems. Drawing on prior research and principles from design thinking, we propose a three-part framework for explanation design in XAI: asking Who needs the explanation, What they need explained, and How that explanation should be delivered. We also emphasize the need for ethical considerations, including risks of epistemic inequality, reinforcing social inequities, and obscuring accountability and governance. By treating explanation as a sociotechnical design process, this framework encourages a context-aware approach to XAI that supports effective communication and the development of ethically responsible explanations.
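The paper's framework is conceptual rather than computational, but the three questions can be made concrete as an explicit design record. The following Python sketch is purely illustrative: all names (Audience, Need, Modality, ExplanationDesign) and the heuristics in review() are our own assumptions, not an API or method from the paper.

```python
# Minimal sketch of the Who/What/How framing as a design checklist.
# Everything here is an illustrative assumption; the paper proposes a
# conceptual framework, not code.
from dataclasses import dataclass
from enum import Enum


class Audience(Enum):
    """Who needs the explanation."""
    END_USER = "end user"
    DOMAIN_EXPERT = "domain expert"
    REGULATOR = "regulator"
    DEVELOPER = "developer"


class Need(Enum):
    """What they need explained."""
    SINGLE_DECISION = "why this particular output"
    GLOBAL_BEHAVIOR = "how the model behaves overall"
    RECOURSE = "what the user could change"


class Modality(Enum):
    """How the explanation should be delivered."""
    FEATURE_ATTRIBUTION = "feature attribution scores"
    COUNTERFACTUAL = "counterfactual example"
    NATURAL_LANGUAGE = "plain-language narrative"


@dataclass
class ExplanationDesign:
    who: Audience
    what: Need
    how: Modality

    def review(self) -> list[str]:
        """Return hypothetical design warnings. Real criteria would come
        from user studies with the intended audience, not hard-coded rules."""
        notes = []
        if self.who is Audience.END_USER and self.how is Modality.FEATURE_ATTRIBUTION:
            notes.append("Raw attribution scores may be inaccessible to lay users.")
        if self.what is Need.RECOURSE and self.how is not Modality.COUNTERFACTUAL:
            notes.append("Recourse needs are often best served by counterfactuals.")
        return notes


design = ExplanationDesign(Audience.END_USER, Need.RECOURSE, Modality.FEATURE_ATTRIBUTION)
for note in design.review():
    print(note)
```

The point of the sketch is that making Who, What, and How explicit, rather than leaving them implicit in a choice of XAI technique, is what lets a design be reviewed for audience fit at all.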
Similar Papers
From Explainable to Explanatory Artificial Intelligence: Toward a New Paradigm for Human-Centered Explanations through Generative AI
Artificial Intelligence
Proposes human-centered, conversational AI explanations delivered through generative AI.
Onto-Epistemological Analysis of AI Explanations
Artificial Intelligence
Examines the philosophical grounds on which AI explanations can be understood and trusted.
Who Benefits from AI Explanations? Towards Accessible and Interpretable Systems
Artificial Intelligence
Studies who actually benefits from AI explanations and how to make them accessible, including to blind users.