Human-AI Complementarity: A Goal for Amplified Oversight
By: Rishub Jain, Sophie Bridgers, Lili Janzer, and more
Potential Business Impact:
AI helps people check if AI is telling the truth.
Human feedback is critical for aligning AI systems with human values. As AI capabilities improve and AI is applied to more challenging tasks, verifying the quality and safety of its outputs becomes increasingly difficult. This paper explores how we can leverage AI to improve the quality of human oversight. We focus on an important safety problem that is already challenging for humans: fact-verification of AI outputs. We find that combining AI ratings and human ratings, routed by AI rater confidence, is better than relying on either alone. Giving humans an AI fact-verification assistant further improves their accuracy, but the type of assistance matters: displaying the AI's explanation, confidence, and labels leads to over-reliance, whereas showing only search results and evidence fosters more appropriate trust. These results have implications for Amplified Oversight -- the challenge of combining humans and AI to supervise AI systems even as they surpass human expert performance.
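The hybrid-rating idea described above can be sketched as a simple confidence-based router: accept the AI's fact-verification label when its confidence is high, and defer to a human rater otherwise. This is a minimal illustration, not the paper's implementation; the function names and the 0.9 threshold are assumptions for the example.

```python
def hybrid_verdict(ai_label, ai_confidence, ask_human, threshold=0.9):
    """Route a fact-verification decision between AI and human raters.

    Returns the AI's label when its confidence meets the threshold;
    otherwise defers to the human rater via ask_human().
    All names and the threshold value are illustrative assumptions.
    """
    if ai_confidence >= threshold:
        return ai_label
    return ask_human()


# Confident AI: its label is accepted, no human is consulted.
confident = hybrid_verdict("supported", 0.95, ask_human=lambda: "refuted")

# Unconfident AI: the decision is deferred to the human rater.
deferred = hybrid_verdict("supported", 0.40, ask_human=lambda: "refuted")
```

The threshold trades off human effort against accuracy: raising it sends more items to humans, which the paper suggests pays off precisely on the cases where the AI rater is unsure.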
Similar Papers
AI and Human Oversight: A Risk-Based Framework for Alignment
Computers and Society
Keeps AI from making bad choices without people.
Scalable Oversight via Partitioned Human Supervision
Machine Learning (CS)
Helps AI learn from experts' "not this" feedback.
Secure Human Oversight of AI: Exploring the Attack Surface of Human Oversight
Cryptography and Security
Secures AI by protecting people watching it.