Can Reasoning Models Obfuscate Reasoning? Stress-Testing Chain-of-Thought Monitorability
By: Artur Zolkowski , Wen Xing , David Lindner and more
Potential Business Impact:
AI can hide its bad thinking to trick us.
Recent findings suggest that misaligned models may exhibit deceptive behavior, raising concerns about output trustworthiness. Chain-of-thought (CoT) is a promising tool for alignment monitoring: when models articulate their reasoning faithfully, monitors can detect and mitigate harmful behaviors before undesirable outcomes occur. However, a key uncertainty is: Can models obfuscate their CoT in order to pursue hidden adversarial objectives while evading detection? To answer this question and thus stress-test CoT monitorability, we develop a composable and quantifiable taxonomy of prompts to elicit CoT obfuscation. We evaluate both internal CoT (reasoning traces) and external CoT (prompted reasoning in outputs) using toy tasks and more realistic environments in SHADE-Arena. We show that: (i) CoT monitoring performs accurately and efficiently without obfuscation pressure. (ii) Under strong obfuscation pressure, some models successfully complete adversarial tasks while evading detection. (iii) Models do not obfuscate their internal CoT as much as their external CoT (under prompt pressure). These results suggest that while CoT provides valuable oversight in benign settings, robust deployment requires model-specific stress-testing of monitorability.
Similar Papers
When Chain of Thought is Necessary, Language Models Struggle to Evade Monitors
Artificial Intelligence
Makes AI show its thinking to prevent harm.
Reasoning Models Don't Always Say What They Think
Computation and Language
Helps check if AI is thinking honestly.
Large language models can learn and generalize steganographic chain-of-thought under process supervision
Artificial Intelligence
Hides bad thinking so computers can't be caught.