Friend or Foe: Delegating to an AI Whose Alignment is Unknown
By: Drew Fudenberg, Annie Liang
Potential Business Impact:
Helps doctors decide how much patient information to share with an AI they may not fully trust.
AI systems have the potential to improve decision-making, but decision makers face the risk that the AI may be misaligned with their objectives. We study this problem in the context of a treatment decision, where a designer decides which patient attributes to reveal to an AI before receiving a prediction of the patient's need for treatment. Providing the AI with more information increases the benefits of an aligned AI but also amplifies the harm from a misaligned one. We characterize how the designer should select attributes to balance these competing forces, depending on their beliefs about the AI's reliability. We show that the designer should optimally disclose attributes that identify rare segments of the population in which the need for treatment is high, and pool the remaining patients.
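The tradeoff described in the abstract can be sketched numerically. The toy model below is my own construction, not the paper's formal setup: the segment names, masses, need probabilities, and payoff numbers are all illustrative assumptions, and the misaligned AI is modeled simply as an adversary that picks the designer-worst action in each revealed cell.

```python
# Toy sketch (illustrative assumptions, not the paper's model): patients fall
# into observable segments, each with a population mass and a probability of
# needing treatment. The designer chooses which segments to reveal to the AI
# as separate cells and which to pool together.

# (mass, probability of needing treatment) -- hypothetical numbers,
# including one rare segment where the need for treatment is high.
SEGMENTS = {
    "rare_high_need": (0.10, 0.90),
    "common_mid":     (0.45, 0.40),
    "common_low":     (0.45, 0.20),
}

def cell_payoffs(cell):
    """Designer's expected payoff if the AI treats / does not treat a cell.

    Assumed per-patient payoffs: +1 treat needy, -1 treat healthy,
    -1 leave needy untreated, 0 leave healthy untreated.
    """
    mass = sum(SEGMENTS[s][0] for s in cell)
    need = sum(SEGMENTS[s][0] * SEGMENTS[s][1] for s in cell) / mass
    return mass * (2 * need - 1), mass * (-need)  # (treat, no_treat)

def policy_value(partition, p_aligned):
    """Expected payoff when the AI is aligned with probability p_aligned."""
    aligned = sum(max(cell_payoffs(c)) for c in partition)     # best for designer
    misaligned = sum(min(cell_payoffs(c)) for c in partition)  # worst for designer
    return p_aligned * aligned + (1 - p_aligned) * misaligned

FULL = [["rare_high_need"], ["common_mid"], ["common_low"]]  # reveal everything
RARE = [["rare_high_need"], ["common_mid", "common_low"]]    # reveal rare, pool rest
NONE = [["rare_high_need", "common_mid", "common_low"]]      # reveal nothing

for p in (0.9, 0.3):
    print(f"p_aligned={p}:",
          {name: round(policy_value(part, p), 3)
           for name, part in [("full", FULL), ("rare", RARE), ("none", NONE)]})
```

In this sketch, finer disclosure weakly raises the payoff from an aligned AI and weakly deepens the loss from a misaligned one, so the preferred partition shifts from full disclosure toward pooling as the designer's confidence in alignment falls, in line with the tradeoff the abstract describes.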
Similar Papers
A Decision-Theoretic Approach for Managing Misalignment
Artificial Intelligence
Lets AI make decisions when it's good enough.
Human-AI Collaboration with Misaligned Preferences
CS and Game Theory
Helps people choose better by making smart mistakes.