A race to belief: How Evidence Accumulation shapes trust in AI and Human informants

Published: November 27, 2025 | arXiv ID: 2511.22617v1

By: Johan Sebastián Galindez-Acosta, Juan José Giraldo-Huertas

Potential Business Impact:

Explains why we trust AI for facts, people for feelings.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

The integration of artificial intelligence into everyday decision-making has reshaped patterns of selective trust, yet the cognitive mechanisms behind context-dependent preferences for AI versus human informants remain unclear. We applied a Bayesian Hierarchical Sequential Sampling Model (HSSM) to analyze how 102 Colombian university students made trust decisions across 30 epistemic (factual) and social (interpersonal) scenarios. Results show that context-dependent trust is primarily driven by differences in drift rate (v), the rate of evidence accumulation, rather than initial bias (z) or response caution (a). Epistemic scenarios produced strong negative drift rates (mean v = -1.26), indicating rapid evidence accumulation favoring AI, whereas social scenarios yielded positive drift rates (mean v = 0.70) favoring humans. Starting points were near neutral (z = 0.52), indicating minimal prior bias. Drift rate showed a strong within-subject association with signed confidence (Fisher-z-averaged r = 0.736; 95% bootstrap CI [0.699, 0.766]; 97.8% of individual correlations positive, N = 93), suggesting that model-derived evidence accumulation closely mirrors participants' moment-to-moment confidence. These dynamics may help explain the fragility of AI trust: in epistemic domains, rapid but low-vigilance evidence processing may promote uncalibrated reliance on AI that collapses quickly after errors. Interpreted through epistemic vigilance theory, the results indicate that domain-specific vigilance mechanisms modulate evidence accumulation. The findings inform AI governance by highlighting the need for transparency features that sustain vigilance without sacrificing efficiency, offering a mechanistic account of selective trust in human-AI collaboration.
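
To make the parameter roles concrete, here is a minimal sketch, not the authors' actual HSSM pipeline: it assumes a standard two-boundary drift-diffusion process, arbitrarily maps the upper boundary to "trust the human" and the lower boundary to "trust the AI", and plugs in the reported group means (v = -1.26 and 0.70, z = 0.52) as illustrative parameter values. The boundary separation a = 1.5, step size, function names, and the synthetic correlations passed to the Fisher-z helper are assumptions chosen only for this example.

```python
# Minimal sketch (not the authors' HSSM code): a two-boundary drift-diffusion
# simulation showing how drift rate (v), relative starting point (z), and
# boundary separation (a) shape choices, plus Fisher-z averaging of
# per-participant correlations with a percentile bootstrap CI.
import numpy as np

rng = np.random.default_rng(0)


def simulate_ddm(v, a, z, n_trials=2000, dt=0.005, sigma=1.0, max_t=5.0):
    """Return the proportion of upper-boundary choices.

    Assumed mapping (illustrative): upper boundary = "trust the human",
    lower boundary = "trust the AI", so negative v drifts toward AI.
    """
    upper_hits = 0
    decided = 0
    for _ in range(n_trials):
        x, t = z * a, 0.0              # start at fraction z of the boundary separation
        while 0.0 < x < a and t < max_t:
            x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        if x >= a:
            upper_hits += 1            # reached "trust the human"
            decided += 1
        elif x <= 0.0:
            decided += 1               # reached "trust the AI"
    return upper_hits / max(decided, 1)


def fisher_z_average(rs):
    """Average per-participant correlations on the Fisher-z scale."""
    z_vals = np.arctanh(np.clip(rs, -0.999, 0.999))
    return np.tanh(np.mean(z_vals))


def bootstrap_ci(rs, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI (resampling participants) for the Fisher-z average."""
    rs = np.asarray(rs)
    stats = [fisher_z_average(rng.choice(rs, size=rs.size, replace=True))
             for _ in range(n_boot)]
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))


if __name__ == "__main__":
    # Group-mean drift rates from the abstract; a = 1.5 is an assumed value.
    print("epistemic (v = -1.26), P(trust human):", simulate_ddm(v=-1.26, a=1.5, z=0.52))
    print("social    (v = +0.70), P(trust human):", simulate_ddm(v=0.70, a=1.5, z=0.52))

    # Synthetic per-participant correlations, only to exercise the helpers.
    fake_rs = rng.uniform(0.5, 0.9, size=93)
    print("Fisher-z average:", fisher_z_average(fake_rs))
    print("95% bootstrap CI:", bootstrap_ci(fake_rs))
```

Running this reproduces the qualitative pattern described in the abstract: a strongly negative drift rate yields mostly AI choices despite a near-neutral starting point, while a positive drift rate flips the preference toward humans.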

Country of Origin
🇨🇴 Colombia

Page Count
35 pages

Category
Computer Science:
Human-Computer Interaction