Human Cognitive Biases in Explanation-Based Interaction: The Case of Within and Between Session Order Effect
By: Dario Pesenti, Alessandro Bogani, Katya Tentori, and more
Potential Business Impact:
Fixes AI mistakes by showing how it thinks.
Explanatory Interactive Learning (XIL) is a powerful interactive learning framework designed to enable users to customize and correct AI models by interacting with their explanations. In a nutshell, XIL algorithms select a number of items on which an AI model made a decision (e.g., images and their tags) and present them to users, together with corresponding explanations (e.g., the image regions that drive the model's decision). Users then supply corrective feedback on the explanations, which the algorithm uses to improve the model. Despite showing promise in debugging tasks, recent studies have raised concerns that explanatory interaction may trigger order effects, a well-known cognitive bias in which the sequence of presented items influences users' trust and, critically, the quality of their feedback. We argue that these studies are not entirely conclusive, as the experimental designs and tasks they employed differ substantially from common XIL use cases, complicating interpretation. To clarify the interplay between order effects and explanatory interaction, we ran two larger-scale user studies (n = 713 in total) designed to mimic common XIL tasks. Specifically, we assessed order effects both within and between debugging sessions by manipulating the order in which correct and wrong explanations were presented to participants. Order effects had a limited, though significant, impact on users' agreement with the model (i.e., a behavioral measure of their trust), and only when examined within debugging sessions, not between them. The quality of users' feedback was generally satisfactory, with order effects exerting only a small and inconsistent influence in both experiments. Overall, our findings suggest that order effects do not pose a significant obstacle to the successful deployment of XIL approaches. More broadly, our work contributes to the ongoing effort to understand human factors in AI.
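For readers unfamiliar with the XIL workflow the abstract describes, the following is a minimal sketch of a debugging-session loop. All names here (`model`, `explainer`, `get_user_feedback`, `select_items`) are hypothetical stand-ins, not the authors' implementation; the sketch only illustrates the select, explain, collect-feedback, update cycle referred to above.

```python
# Minimal sketch of an Explanatory Interactive Learning (XIL) debugging loop.
# All objects below (model, explainer, get_user_feedback) are hypothetical
# stand-ins used to illustrate the cycle described in the abstract; they are
# not the authors' implementation.

def select_items(model, pool, k=10):
    """Pick the k items the model is least confident about (an illustrative selection heuristic)."""
    by_confidence = sorted(pool, key=lambda x: max(model.predict_proba(x)))
    return by_confidence[:k]

def xil_session(model, pool, explainer, get_user_feedback, rounds=5):
    """Run one debugging session consisting of several feedback rounds."""
    for _ in range(rounds):
        for item in select_items(model, pool):
            prediction = model.predict(item)
            explanation = explainer(model, item)   # e.g., the image regions driving the decision
            feedback = get_user_feedback(item, prediction, explanation)
            # Corrective feedback (e.g., regions the user marks as irrelevant) refines the model.
            model.update(item, prediction, feedback)
    return model
```

The order effects studied in the paper concern how correct and wrong explanations are sequenced within and across such sessions, which this sketch leaves unspecified.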
Similar Papers
The Effect of Explainable AI-based Decision Support on Human Task Performance: A Meta-Analysis
Human-Computer Interaction
Makes AI help people make better choices.
Can AI Explanations Make You Change Your Mind?
Human-Computer Interaction
Helps people trust AI by showing how it thinks.
Human-AI collaboration or obedient and often clueless AI in instruct, serve, repeat dynamics?
Human-Computer Interaction
AI teaches students, but doesn't truly learn with them.