When Thinking Pays Off: Incentive Alignment for Human-AI Collaboration

Published: November 12, 2025 | arXiv ID: 2511.09612v1

By: Joshua Holstein, Patrick Hemmer, Gerhard Satzger, and more

BigTech Affiliations: IBM

Potential Business Impact:

Reduces people's overreliance on AI advice, improving the quality of human-AI decisions.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

Collaboration with artificial intelligence (AI) has improved human decision-making across various domains by leveraging the complementary capabilities of humans and AI. Yet, humans systematically overrely on AI advice, even when their independent judgment would yield superior outcomes, fundamentally undermining the potential of human-AI complementarity. Building on prior work, we identify prevailing incentive structures in human-AI decision-making as a structural driver of this overreliance. To address this misalignment, we propose an alternative incentive mechanism designed to counteract systemic overreliance. We empirically evaluate this approach through a behavioral experiment with 180 participants, finding that the proposed mechanism significantly reduces overreliance. We also show that while appropriately designed incentives can enhance collaboration and decision quality, poorly designed incentives may distort behavior, introduce unintended consequences, and ultimately degrade performance. These findings underscore the importance of aligning incentives with task context and human-AI complementarities, and suggest that effective collaboration requires a shift toward context-sensitive incentive design.
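The paper does not spell out its incentive mechanism in this abstract, but the core idea of counteracting overreliance through payoffs can be illustrated with a minimal, purely hypothetical sketch: compare a flat accuracy-based reward with a scheme that adds a bonus for being correct without deferring to the AI and a penalty for errors made after following its advice. All accuracies and payoff values below are invented for illustration and are not taken from the study.

```python
# Hypothetical illustration (assumed numbers, not the paper's actual design):
# compare a flat accuracy incentive with an overreliance-aware one,
# for a decider whose own judgment is more accurate than the AI's.

def expected_payoff(p_correct, followed_ai, payoff):
    """Expected reward: correct with probability p_correct, wrong otherwise."""
    return (p_correct * payoff(True, followed_ai)
            + (1 - p_correct) * payoff(False, followed_ai))

def flat(correct, followed_ai):
    # Standard scheme: reward depends only on being right.
    return 1.0 if correct else 0.0

def overreliance_aware(correct, followed_ai):
    # Alternative scheme: bonus for being right on your own,
    # penalty for being wrong after deferring to the AI.
    if correct:
        return 1.5 if not followed_ai else 1.0
    return -0.5 if followed_ai else 0.0

p_human, p_ai = 0.8, 0.6  # assumed accuracies: human judgment is better here

for name, scheme in [("flat", flat), ("aware", overreliance_aware)]:
    own = expected_payoff(p_human, followed_ai=False, payoff=scheme)
    defer = expected_payoff(p_ai, followed_ai=True, payoff=scheme)
    print(f"{name}: rely on self = {own:.2f}, defer to AI = {defer:.2f}")
```

Under the flat scheme the expected-payoff gap between relying on oneself and deferring is 0.20; the overreliance-aware scheme widens it to 0.80, making independent judgment strictly more attractive whenever it is actually more accurate. The abstract's warning also applies: if the human were *less* accurate than the AI, the same penalty structure could discourage beneficial reliance, which is why the authors stress context-sensitive incentive design.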

Country of Origin
πŸ‡©πŸ‡ͺ πŸ‡ΊπŸ‡Έ Germany, United States

Page Count
19 pages

Category
Computer Science:
Human-Computer Interaction