A Race to Belief: How Evidence Accumulation Shapes Trust in AI and Human Informants
By: Johan Sebastián Galindez-Acosta, Juan José Giraldo-Huertas
Potential Business Impact:
Explains why we trust AI for facts, people for feelings.
The integration of artificial intelligence into everyday decision-making has reshaped patterns of selective trust, yet the cognitive mechanisms behind context-dependent preferences for AI versus human informants remain unclear. We applied a Bayesian Hierarchical Sequential Sampling Model (HSSM) to analyze how 102 Colombian university students made trust decisions across 30 epistemic (factual) and social (interpersonal) scenarios. Results show that context-dependent trust is driven primarily by differences in drift rate (v), the rate of evidence accumulation, rather than by initial bias (z) or response caution (a). Epistemic scenarios produced strongly negative drift rates (mean v = -1.26), indicating rapid evidence accumulation favoring AI, whereas social scenarios yielded positive drift rates (mean v = 0.70) favoring humans. Starting points were near neutral (z = 0.52), indicating minimal prior bias. Drift rate showed a strong within-subject association with signed confidence (Fisher-z-averaged r = 0.736; 95% bootstrap CI [0.699, 0.766]; 97.8% of individual correlations positive; N = 93), suggesting that model-derived evidence accumulation closely mirrors participants' moment-to-moment confidence. These dynamics may help explain the fragility of trust in AI: in epistemic domains, rapid but low-vigilance evidence processing may promote uncalibrated reliance on AI that collapses quickly after errors. Interpreted through epistemic vigilance theory, the results indicate that domain-specific vigilance mechanisms modulate evidence accumulation. The findings inform AI governance by highlighting the need for transparency features that sustain vigilance without sacrificing efficiency, offering a mechanistic account of selective trust in human-AI collaboration.
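To make the modeling concrete, here is a minimal Python sketch (not the authors' code) of the two ingredients the abstract relies on: a two-boundary drift-diffusion simulation showing how the reported group-mean drift rates (v = -1.26 epistemic, v = 0.70 social) and near-neutral starting point (z = 0.52) translate into choice proportions, and the Fisher-z averaging used to pool per-participant correlations. The boundary separation a, time step, noise scale, and trial counts are illustrative assumptions, since the abstract does not report them.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(v, a=1.5, z=0.52, dt=0.005, sigma=1.0,
                 n_trials=2000, max_steps=20000):
    """Vectorized Euler-Maruyama simulation of a two-boundary diffusion.

    Upper boundary (a) = trust the human informant.
    Lower boundary (0) = trust the AI informant.
    z is the starting point as a fraction of a (0.5 = unbiased).
    Returns the proportion of trials absorbed at the upper boundary.
    """
    x = np.full(n_trials, z * a)          # evidence state per trial
    choice = np.full(n_trials, np.nan)    # 1 = human, 0 = AI
    active = np.ones(n_trials, dtype=bool)
    for _ in range(max_steps):
        if not active.any():
            break
        noise = sigma * np.sqrt(dt) * rng.standard_normal(active.sum())
        x[active] += v * dt + noise       # drift plus diffusion step
        hit_up = active & (x >= a)
        hit_lo = active & (x <= 0.0)
        choice[hit_up] = 1.0
        choice[hit_lo] = 0.0
        active &= ~(hit_up | hit_lo)      # stop accumulating finished trials
    return np.nanmean(choice)

def fisher_z_average(rs):
    """Average per-participant correlations in Fisher-z space, then back-transform."""
    return np.tanh(np.mean(np.arctanh(np.asarray(rs))))

# Group-mean drift rates and starting point reported in the abstract;
# boundary separation a is NOT reported, so a = 1.5 is a placeholder.
print(f"P(human | epistemic, v=-1.26): {simulate_ddm(v=-1.26):.2f}")
print(f"P(human | social,    v=+0.70): {simulate_ddm(v=0.70):.2f}")

# Fisher-z averaging on made-up per-participant correlations
print(f"Fisher-z mean r: {fisher_z_average([0.80, 0.70, 0.74]):.3f}")
```

With a strongly negative drift, most trajectories are absorbed at the lower (AI) boundary even though the starting point is nearly unbiased, reproducing the qualitative epistemic-versus-social asymmetry without any change in z or a, which is the paper's central claim.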
Similar Papers
Trust in AI emerges from distrust in humans: A machine learning study on decision-making guidance
Human-Computer Interaction
People trust computers more when they don't trust people.
On the Influence of Artificial Intelligence on Human Problem-Solving: Empirical Insights for the Third Wave in a Multinational Longitudinal Pilot Study
Computers and Society
Helps people check AI answers better.
How AI Responses Shape User Beliefs: The Effects of Information Detail and Confidence on Belief Strength and Stance
Human-Computer Interaction
AI's detailed, confident answers change minds more.