Caught in the Act: a mechanistic approach to detecting deception
By: Gerard Boxo, Ryan Socha, Daniel Yoo, and more
Potential Business Impact:
Finds when AI lies about facts.
Sophisticated instrumentation for AI systems might include indicators that signal misalignment with human values, not unlike a "check engine" light in cars. One such indicator of misalignment is deceptiveness in generated responses. Future AI instrumentation may be able to detect when an LLM generates deceptive responses while reasoning about seemingly plausible but incorrect answers to factual questions. In this work, we demonstrate that linear probes on LLMs' internal activations can detect deception in their responses with very high accuracy. Our probes reach over 90% accuracy at their best in distinguishing between deceptive and non-deceptive arguments generated by Llama and Qwen models ranging from 1.5B to 14B parameters, including their DeepSeek-R1 fine-tuned variants. We observe that probes on smaller models (1.5B) perform at chance when detecting deception, while larger models (7B and above) reach 70-80% accuracy, with their reasoning counterparts exceeding 90%. Probe accuracy follows a three-stage pattern across layers: near-random (50%) in early layers, peaking in middle layers, and declining slightly in later layers. Furthermore, using an iterative null-space projection approach, we find a multitude of linear directions that encode deception, ranging from about 20 in Qwen 3B to nearly 100 in the DeepSeek 7B and Qwen 14B models.
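The abstract names two techniques that can be illustrated concretely: training linear probes on a model's internal activations to separate deceptive from truthful responses, and iterative null-space projection to count how many independent linear directions encode deception. The sketch below shows one plausible way to implement both; the model checkpoint, the chosen layer, the accuracy threshold, and all function names are illustrative assumptions rather than the authors' actual setup.

```python
# Minimal sketch (not the authors' released code): probe hidden activations for
# deception, then iteratively project out each learned direction and retrain.
# Checkpoint, layer index, and threshold below are assumptions for illustration.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-7B-Instruct"  # assumed checkpoint
LAYER = 16                               # assumed middle layer (probes tend to peak mid-stack)

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, output_hidden_states=True,
    torch_dtype=torch.float16, device_map="auto",
)

def last_token_activation(text: str, layer: int = LAYER) -> np.ndarray:
    """Return the hidden state of the final token at the chosen layer."""
    inputs = tok(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[layer][0, -1].float().cpu().numpy()

def probe_accuracy(X: np.ndarray, y: np.ndarray) -> tuple[float, np.ndarray]:
    """Fit a logistic-regression probe; return held-out accuracy and the unit weight vector."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
    w = clf.coef_[0] / np.linalg.norm(clf.coef_[0])
    return clf.score(X_te, y_te), w

def count_deception_directions(X: np.ndarray, y: np.ndarray,
                               threshold: float = 0.6, max_iters: int = 200) -> int:
    """Iterative null-space projection: train a probe, remove its direction from the
    activations, and repeat until accuracy drops to near chance."""
    X_proj, n_dirs = X.copy(), 0
    for _ in range(max_iters):
        acc, w = probe_accuracy(X_proj, y)
        if acc < threshold:
            break
        n_dirs += 1
        # Subtract each activation's component along w, so this direction can no
        # longer carry the deception signal in the next round.
        X_proj = X_proj - np.outer(X_proj @ w, w)
    return n_dirs

# Usage (assumed data): X stacks last_token_activation() over model-generated
# arguments; y marks each one as deceptive (1) or truthful (0).
# acc, _ = probe_accuracy(X, y)
# n_directions = count_deception_directions(X, y)
```

The loop stops only when no linearly separable deception signal remains, so the number of completed iterations approximates the count of deception-encoding directions the abstract reports (roughly 20 to 100 depending on the model).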
Similar Papers
Beyond Prompt-Induced Lies: Investigating LLM Deception on Benign Prompts
Machine Learning (CS)
Finds when AI lies about hard problems.
When Thinking LLMs Lie: Unveiling the Strategic Deception in Representations of Reasoning Models
Artificial Intelligence
Teaches AI to tell the truth, not lie.