When Thinking Pays Off: Incentive Alignment for Human-AI Collaboration
By: Joshua Holstein, Patrick Hemmer, Gerhard Satzger, and more
Potential Business Impact:
Helps keep people from trusting computers too much.
Collaboration with artificial intelligence (AI) has improved human decision-making across various domains by leveraging the complementary capabilities of humans and AI. Yet, humans systematically overrely on AI advice, even when their independent judgment would yield superior outcomes, fundamentally undermining the potential of human-AI complementarity. Building on prior work, we identify prevailing incentive structures in human-AI decision-making as a structural driver of this overreliance. To address this misalignment, we propose an alternative incentive mechanism designed to counteract systemic overreliance. We empirically evaluate this approach through a behavioral experiment with 180 participants, finding that the proposed mechanism significantly reduces overreliance. We also show that while appropriately designed incentives can enhance collaboration and decision quality, poorly designed incentives may distort behavior, introduce unintended consequences, and ultimately degrade performance. These findings underscore the importance of aligning incentives with task context and human-AI complementarities, and suggest that effective collaboration requires a shift toward context-sensitive incentive design.
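To make the incentive argument concrete, here is a minimal sketch in Python of how a payoff scheme can shift whether deferring to the AI or exercising independent judgment maximizes expected reward. The `expected_payoff` function, the override bonus, and all accuracy figures below are illustrative assumptions, not the mechanism or parameters used in the paper's experiment.

```python
# Toy expected-payoff comparison (hypothetical numbers, not the paper's
# actual mechanism): under a flat per-correct-answer reward, deferring to
# a strong AI can look attractive even when independent judgment is better;
# a premium for correctly disagreeing with the AI shifts that calculus.

def expected_payoff(p_correct: float, reward_correct: float,
                    bonus_override: float = 0.0, p_override: float = 0.0) -> float:
    """Expected payoff per decision.

    p_correct: probability the chosen strategy yields a correct decision.
    reward_correct: payment for a correct decision.
    bonus_override: extra payment when a correct decision disagrees with the AI.
    p_override: share of the strategy's correct decisions that disagree with the AI.
    """
    return p_correct * (reward_correct + p_override * bonus_override)

# Assumed accuracies: the AI is right 80% of the time, while a human who
# thinks independently is right 85% of the time on this task.
ai_only = expected_payoff(p_correct=0.80, reward_correct=1.0)
independent_flat = expected_payoff(p_correct=0.85, reward_correct=1.0)

# An override bonus (assumed design) pays extra when the human correctly
# departs from the AI; say 40% of the human's correct calls are overrides.
independent_bonus = expected_payoff(p_correct=0.85, reward_correct=1.0,
                                    bonus_override=0.5, p_override=0.40)

print(f"defer to AI:        {ai_only:.2f}")        # 0.80
print(f"independent, flat:  {independent_flat:.2f}")  # 0.85
print(f"independent, bonus: {independent_bonus:.2f}")  # 1.02
```

Under the flat scheme, the only pull toward independent effort is the small accuracy edge; the assumed override bonus widens that gap, which captures the intuition behind rewarding correct disagreement rather than raw agreement with the AI.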
Similar Papers
Fostering human learning is crucial for boosting human-AI synergy
Human-Computer Interaction
Helps people and computers work better together.
Bias in the Loop: How Humans Evaluate AI-Generated Suggestions
Human-Computer Interaction
Helps people work better with computers.
Human-AI Collaboration with Misaligned Preferences
CS and Game Theory
Helps people choose better by making smart mistakes.