Beyond Satisfaction: From Placebic to Actionable Explanations For Enhanced Understandability
By: Joe Shymanski, Jacob Brue, Sandip Sen
Potential Business Impact:
Helps computers show why they make decisions.
Explainable AI (XAI) offers useful tools for making machine learning systems more transparent and trustworthy. However, current evaluations of system explainability often rely heavily on subjective user surveys, which may not adequately capture the effectiveness of explanations. This paper critiques the overreliance on user satisfaction metrics and examines whether such metrics can differentiate between meaningful (actionable) and vacuous (placebic) explanations. In experiments on an optimal Social Security filing age selection task, participants were assigned to one of three protocols: no explanations, placebic explanations, or actionable explanations. Participants who received actionable explanations significantly outperformed the other groups on objective measures of their mental model, yet users rated placebic and actionable explanations as equally satisfying. This suggests that subjective surveys alone fail to capture whether explanations truly help users build useful domain understanding. We propose that future evaluations of agent explanation capabilities should integrate objective task performance metrics alongside subjective assessments to more accurately measure explanation quality. The code for this study can be found at https://github.com/Shymkis/social-security-explainer.
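The paper's central recommendation is methodological: report objective mental-model performance alongside satisfaction ratings rather than relying on surveys alone. The sketch below shows one way such a combined report could be computed per protocol group; the `Participant` record, its field names, and the scoring scales are illustrative assumptions, not taken from the paper or its repository.

```python
from dataclasses import dataclass
from statistics import mean, stdev

# Hypothetical per-participant record; field names and scales are
# illustrative assumptions, not drawn from the study's released code.
@dataclass
class Participant:
    protocol: str               # "none", "placebic", or "actionable"
    mental_model_score: float   # objective measure, e.g. quiz accuracy in [0, 1]
    satisfaction: float         # subjective rating, e.g. 1-7 Likert scale

def summarize(participants):
    """Print objective and subjective outcomes side by side for each protocol group."""
    groups = {}
    for p in participants:
        groups.setdefault(p.protocol, []).append(p)
    for name, members in groups.items():
        obj = [m.mental_model_score for m in members]
        sat = [m.satisfaction for m in members]
        print(f"{name:>10}: "
              f"mental model {mean(obj):.2f} +/- {stdev(obj):.2f} | "
              f"satisfaction {mean(sat):.2f} +/- {stdev(sat):.2f}")

# Made-up illustrative values, not results from the study.
data = [
    Participant("actionable", 0.85, 5.8),
    Participant("actionable", 0.78, 5.5),
    Participant("placebic",   0.55, 5.7),
    Participant("placebic",   0.60, 5.6),
    Participant("none",       0.50, 4.1),
    Participant("none",       0.47, 4.0),
]
summarize(data)
```

Reporting both columns together is what surfaces the paper's key pattern: satisfaction can look identical across placebic and actionable groups even when objective mental-model scores diverge.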
Similar Papers
Too Much to Trust? Measuring the Security and Cognitive Impacts of Explainability in AI-Driven SOCs
Cryptography and Security
Helps security experts trust computer threat warnings.
Predicting Satisfaction of Counterfactual Explanations from Human Ratings of Explanatory Qualities
Human-Computer Interaction
Helps AI explain decisions better to people.
Who Benefits from AI Explanations? Towards Accessible and Interpretable Systems
Artificial Intelligence
Makes AI explanations understandable for blind people.