Score: 1

When Chain of Thought is Necessary, Language Models Struggle to Evade Monitors

Published: July 7, 2025 | arXiv ID: 2507.05246v1

By: Scott Emmons , Erik Jenner , David K. Elson and more

BigTech Affiliations: Google

Potential Business Impact:

Makes AI show its thinking to prevent harm.

Business Areas:
Simulation Software

While chain-of-thought (CoT) monitoring is an appealing AI safety defense, recent work on "unfaithfulness" has cast doubt on its reliability. These findings highlight an important failure mode, particularly when CoT acts as a post-hoc rationalization in applications like auditing for bias. However, for the distinct problem of runtime monitoring to prevent severe harm, we argue the key property is not faithfulness but monitorability. To this end, we introduce a conceptual framework distinguishing CoT-as-rationalization from CoT-as-computation. We expect that certain classes of severe harm will require complex, multi-step reasoning that necessitates CoT-as-computation. Replicating the experimental setups of prior work, we increase the difficulty of the bad behavior to enforce this necessity condition; this forces the model to expose its reasoning, making it monitorable. We then present methodology guidelines to stress-test CoT monitoring against deliberate evasion. Applying these guidelines, we find that models can learn to obscure their intentions, but only when given significant help, such as detailed human-written strategies or iterative optimization against the monitor. We conclude that, while not infallible, CoT monitoring offers a substantial layer of defense that requires active protection and continued stress-testing.

Country of Origin
🇺🇸 United States

Page Count
70 pages

Category
Computer Science:
Artificial Intelligence