Difficulties with Evaluating a Deception Detector for AIs
By: Lewis Smith, Bilal Chughtai, Neel Nanda
Potential Business Impact:
Helps tell when an AI is lying without having to watch its behaviour.
Building reliable deception detectors for AI systems -- methods that could predict when an AI system is being strategically deceptive without necessarily requiring behavioural evidence -- would be valuable in mitigating risks from advanced AI systems. But evaluating the reliability and efficacy of a proposed deception detector requires examples that we can confidently label as either deceptive or honest. We argue that we currently lack the necessary examples, and we identify several concrete obstacles to collecting them. We provide evidence from conceptual arguments, analysis of existing empirical work, and novel illustrative case studies. We also discuss several proposed empirical workarounds to these problems and argue that, while they seem valuable, they appear insufficient on their own. Progress on deception detection likely requires further consideration of these problems.
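As a purely illustrative sketch (not the paper's method), the evaluation the abstract describes amounts to scoring a detector against examples whose honest/deceptive labels we already trust. The `detector_score` heuristic and the tiny labelled dataset below are assumptions made up for illustration; the paper's point is that obtaining trustworthy labels at this step is precisely what is currently difficult.

```python
# Purely illustrative sketch: evaluating a hypothetical deception detector
# against examples with trusted honest (0) / deceptive (1) labels.

def detector_score(transcript: str) -> float:
    """Hypothetical detector returning a deceptiveness score in [0, 1].
    A real detector might probe the model's internal activations instead."""
    return 1.0 if "hidden" in transcript.lower() else 0.0  # placeholder heuristic

# The evaluation is only meaningful if these labels can be trusted --
# exactly the kind of ground truth the paper argues we currently lack.
labelled_examples = [
    ("The model reports its true uncertainty about the answer.", 0),        # honest
    ("The model denies seeing the hidden instruction it is following.", 1), # deceptive
]

threshold = 0.5
correct = sum(
    (detector_score(text) >= threshold) == bool(label)
    for text, label in labelled_examples
)
print(f"Accuracy on trusted labels: {correct}/{len(labelled_examples)}")
```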
Similar Papers
AI Deception: Risks, Dynamics, and Controls
Artificial Intelligence
Teaches AI to be honest and not trick people.
Humans incorrectly reject confident accusatory AI judgments
Human-Computer Interaction
AI judges lies better than people, but we don't trust it.
Caught in the Act: a mechanistic approach to detecting deception
Artificial Intelligence
Spots when an AI is lying to you.