Biased AI improves human decision-making but reduces trust
By: Shiyang Lai, Junsol Kim, Nadav Kunievsky, and more
Potential Business Impact:
Biased AI helps people think better, but they don't trust it.
Current AI systems minimize risk by enforcing ideological neutrality, yet this may introduce automation bias by suppressing cognitive engagement in human decision-making. We conducted randomized trials with 2,500 participants to test whether culturally biased AI enhances human decision-making. Participants interacted with politically diverse GPT-4o variants on information evaluation tasks. Partisan AI assistants enhanced human performance, increased engagement, and reduced evaluative bias compared to non-biased counterparts, with amplified benefits when participants encountered opposing views. These gains carried a trust penalty: participants underappreciated biased AI and overcredited neutral systems. Exposing participants to two AIs whose biases flanked human perspectives closed the perception-performance gap. These findings complicate conventional wisdom about AI neutrality, suggesting that strategic integration of diverse cultural biases may foster improved and resilient human decision-making.
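One plausible way to instantiate the "politically diverse GPT-4o variants" described above is to condition a single base model with partisan system prompts. The sketch below is a minimal illustration under that assumption only; the persona wordings, the evaluate_claim helper, and the example claim are hypothetical stand-ins, not the authors' actual experimental materials.

```python
# Minimal sketch: conditioning GPT-4o variants with partisan system prompts.
# The prompt texts and helper names are hypothetical illustrations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical personas flanking the political spectrum, plus a neutral control.
PERSONAS = {
    "liberal": "You evaluate information from a politically liberal perspective.",
    "conservative": "You evaluate information from a politically conservative perspective.",
    "neutral": "You evaluate information with strict ideological neutrality.",
}

def evaluate_claim(persona: str, claim: str) -> str:
    """Ask one persona-conditioned GPT-4o variant to assess a claim."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": f"Assess the accuracy of this claim: {claim}"},
        ],
    )
    return response.choices[0].message.content

# Example: present two AIs whose biases flank the participant's own view,
# mirroring the two-AI condition that closed the perception-performance gap.
claim = "The new policy reduced unemployment."
for persona in ("liberal", "conservative"):
    print(persona, "->", evaluate_claim(persona, claim))
```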
Similar Papers
Bias in the Loop: How Humans Evaluate AI-Generated Suggestions
Human-Computer Interaction
Examines how human biases shape the evaluation of AI-generated suggestions.