Humans incorrectly reject confident accusatory AI judgments
By: Riccardo Loconte, Merylin Monaro, Pietro Pietrini and more
Potential Business Impact:
AI judges lies better than people, but we don't trust it.
Automated verbal deception detection using methods from Artificial Intelligence (AI) has been shown to outperform humans in disentangling lies from truths. Research suggests that transparency and interpretability of computational methods tend to increase human acceptance of using AI to support decisions. However, the extent to which humans accept AI judgments for deception detection remains unclear. We experimentally examined how an AI model's accuracy (i.e., its overall performance in deception detection) and confidence (i.e., the model's uncertainty in single-statement predictions) influence human adoption of the model's judgments. Participants (n=373) were presented with veracity judgments of an AI model with high or low overall accuracy and various degrees of prediction confidence. The results showed that humans followed predictions from a highly accurate model more than from a less accurate one. Interestingly, the more confident the model, the more people deviated from it, especially if the model predicted deception. We also found that human interaction with algorithmic predictions either worsened the machine's performance or was ineffective. While this human aversion to accepting highly confident algorithmic predictions was partly explained by participants' tendency to overestimate humans' deception detection abilities, we also discuss how truth-default theory and the social costs of accusing someone of lying help explain the findings.
Similar Papers
Trust in AI emerges from distrust in humans: A machine learning study on decision-making guidance
Human-Computer Interaction
People trust computers more when they don't trust people.
Human vs. Algorithmic Auditors: The Impact of Entity Type and Ambiguity on Human Dishonesty
General Economics
Machines catch more cheating when rules are unclear.
Biased AI improves human decision-making but reduces trust
Human-Computer Interaction
Biased AI helps people think better, but they trust it less.