When Chain of Thought is Necessary, Language Models Struggle to Evade Monitors
By: Scott Emmons , Erik Jenner , David K. Elson and more
Potential Business Impact:
Makes AI show its thinking to prevent harm.
While chain-of-thought (CoT) monitoring is an appealing AI safety defense, recent work on "unfaithfulness" has cast doubt on its reliability. These findings highlight an important failure mode, particularly when CoT acts as a post-hoc rationalization in applications like auditing for bias. However, for the distinct problem of runtime monitoring to prevent severe harm, we argue the key property is not faithfulness but monitorability. To this end, we introduce a conceptual framework distinguishing CoT-as-rationalization from CoT-as-computation. We expect that certain classes of severe harm will require complex, multi-step reasoning that necessitates CoT-as-computation. Replicating the experimental setups of prior work, we increase the difficulty of the bad behavior to enforce this necessity condition; this forces the model to expose its reasoning, making it monitorable. We then present methodology guidelines to stress-test CoT monitoring against deliberate evasion. Applying these guidelines, we find that models can learn to obscure their intentions, but only when given significant help, such as detailed human-written strategies or iterative optimization against the monitor. We conclude that, while not infallible, CoT monitoring offers a substantial layer of defense that requires active protection and continued stress-testing.
Similar Papers
Reasoning Models Don't Always Say What They Think
Computation and Language
Helps check if AI is thinking honestly.
A Concrete Roadmap towards Safety Cases based on Chain-of-Thought Monitoring
Machine Learning (CS)
Keeps AI safe by watching its thinking.
Can Reasoning Models Obfuscate Reasoning? Stress-Testing Chain-of-Thought Monitorability
Cryptography and Security
AI can hide its bad thinking to trick us.