Human-AI Collaboration with Misaligned Preferences
By: Jiaxin Song, Parnian Shahkar, Kate Donahue, and more
Potential Business Impact:
Helps people choose better: an assistant that makes different mistakes than its user can still improve their decisions.
In many real-life settings, algorithms play the role of assistants, while humans ultimately make the final decision. Often, algorithms specifically act as curators, narrowing a wide range of options down to a smaller subset that the human picks between: consider content recommendation, or chatbot responses to questions with multiple valid answers. Crucially, humans may not know their own preferences perfectly either; instead, they may only have access to a noisy sample of those preferences. Algorithms can assist humans by curating a smaller subset of items, but they also face the challenge of misalignment: humans may have different preferences from each other (and from the algorithm), and the algorithm may not know the exact preferences of the human it is facing at any point in time. In this paper, we model and theoretically study such a setting. Specifically, we show instances where humans benefit by collaborating with a misaligned algorithm. Surprisingly, humans can gain more utility from a misaligned algorithm (which makes different mistakes) than from an aligned one. We then build on this result by studying which properties of algorithms maximize human welfare, whether the goal is utilitarian welfare or ensuring that every human benefits. We conclude by discussing implications for designers of algorithmic tools and for policymakers.
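The intuition behind the surprising claim can be illustrated with a toy Monte Carlo sketch (our own illustrative model, not the paper's formal setting, and all names below are hypothetical): items have true utilities, both the human and the algorithm observe them with independent noise, the algorithm curates a top-k subset, and the human picks their favorite within it. An "aligned" algorithm here shares the human's noisy estimates, so its mistakes are identical to the human's and curation changes nothing; a "misaligned" algorithm makes independent mistakes, so its curation filters out items the human merely overrated.

```python
import random

def simulate(n_items=50, k=10, noise=1.0, trials=20000, seed=0):
    """Toy model of curation with noisy preferences (illustrative only).

    Returns (mean true utility with an aligned curator,
             mean true utility with a misaligned curator).
    """
    rng = random.Random(seed)
    aligned_total = 0.0
    misaligned_total = 0.0
    for _ in range(trials):
        # True item utilities, unknown to both parties.
        true_u = [rng.gauss(0, 1) for _ in range(n_items)]
        # Human and algorithm each see an independently noisy estimate.
        human = [u + rng.gauss(0, noise) for u in true_u]
        algo = [u + rng.gauss(0, noise) for u in true_u]

        # Aligned curator: shares the human's estimates, so its top-k always
        # contains the human's favorite item; curation has no effect.
        aligned_pick = max(range(n_items), key=lambda i: human[i])
        aligned_total += true_u[aligned_pick]

        # Misaligned curator: ranks by its own estimates, then the human
        # picks their favorite within the curated subset. The final pick must
        # score well on both independent signals, so errors partly cancel.
        subset = sorted(range(n_items), key=lambda i: algo[i], reverse=True)[:k]
        mis_pick = max(subset, key=lambda i: human[i])
        misaligned_total += true_u[mis_pick]
    return aligned_total / trials, misaligned_total / trials
```

In this sketch, the misaligned curator's average true utility exceeds the aligned curator's, because requiring an item to look good under two independent noisy signals is stronger evidence of high true utility than one signal alone. The paper's actual model and welfare objectives are richer than this.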
Similar Papers
Human-AI Collaboration: Trade-offs Between Performance and Preferences
Artificial Intelligence
AI learns to work better with people.
Emergent Alignment via Competition
Machine Learning (CS)
Multiple AIs can work together to help you.