Can AI Explanations Make You Change Your Mind?
By: Laura Spillner, Rachel Ringe, Robert Porzel, and more
Potential Business Impact:
Helps people trust AI by showing how it thinks.
In the context of AI-based decision support systems (DSS), explanations can help users judge when to trust the AI's suggestion and when to question it. In this way, human oversight can prevent AI errors and biased decision-making. However, this rests on the assumption that users consider explanations in enough detail to catch such errors. We conducted an online study on trust in explainable DSS and were surprised to find that participants often spent little time on the explanation and did not always consider it in detail. We present an exploratory analysis of this data, investigating which factors affect how carefully study participants consider AI explanations, and how this in turn affects whether they are open to changing their mind based on what the AI suggests.
Similar Papers
Beware of "Explanations" of AI
Machine Learning (CS)
Makes AI explanations safer and more trustworthy.
Preliminary Quantitative Study on Explainability and Trust in AI Systems
Artificial Intelligence
Makes AI loan decisions easier to trust.
"Even explanations will not help in trusting [this] fundamentally biased system": A Predictive Policing Case-Study
Human-Computer Interaction
Helps people know when to trust AI.