Overcoming Algorithm Aversion with Transparency: Can Transparent Predictions Change User Behavior?
By: Lasse Bohlen, Sven Kruschel, Julian Rosenberger, and more
Potential Business Impact:
Letting people adjust the computer's predictions makes them more willing to rely on it.
Previous work has shown that allowing users to adjust a machine learning (ML) model's predictions can reduce aversion to imperfect algorithmic decisions. However, these results were obtained in situations where users had no information about the model's reasoning. Thus, it remains unclear whether interpretable ML models could further reduce algorithm aversion or even render adjustability obsolete. In this paper, we conceptually replicate a well-known study that examines the effect of adjustable predictions on algorithm aversion and extend it by introducing an interpretable ML model that visually reveals its decision logic. Through a pre-registered user study with 280 participants, we investigate how transparency interacts with adjustability in reducing aversion to algorithmic decision-making. Our results replicate the adjustability effect, showing that allowing users to modify algorithmic predictions mitigates aversion. Transparency's impact appears smaller than expected and was not statistically significant in our sample. Furthermore, the effects of transparency and adjustability appear to be more independent than expected.
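To make the two study conditions concrete, here is a minimal sketch of what "transparent" and "adjustable" predictions could look like in code, using a shallow decision tree as the interpretable model. The dataset, the tree model, and the adjustment step are illustrative assumptions for this sketch, not the materials the authors actually used.

```python
# Minimal sketch of a transparent, adjustable prediction workflow.
# NOTE: the dataset, model choice, and adjustment mechanism below are
# assumptions for illustration, not the study's actual setup.
from sklearn.datasets import load_diabetes
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = load_diabetes(return_X_y=True, as_frame=True)

# Transparency: a depth-limited tree whose decision logic can be shown
# to the user as human-readable rules (or rendered visually with plot_tree).
model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
print(export_text(model, feature_names=list(X.columns)))

# Adjustability: the user sees the model's prediction and may override it.
model_pred = float(model.predict(X.iloc[[0]])[0])
user_adjustment = 5.0  # hypothetical user input
final_pred = model_pred + user_adjustment
print(f"model: {model_pred:.1f} -> final (user-adjusted): {final_pred:.1f}")
```

In the study's terms, showing the printed rules corresponds to the transparency manipulation, while accepting a user override of the prediction corresponds to the adjustability manipulation; the two can be switched on independently, which is what allows their interaction to be tested.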
Similar Papers
On the Trade-Off Between Transparency and Security in Adversarial Machine Learning
Machine Learning (CS)
Makes AI safer by hiding its secrets.
Transparent AI: The Case for Interpretability and Explainability
Machine Learning (CS)
Shows how smart computer programs make decisions.
The Impact of Transparency in AI Systems on Users' Data-Sharing Intentions: A Scenario-Based Experiment
Machine Learning (CS)
Trust, rather than understanding how it works, drives people to share data.