A Pragmatic Way to Measure Chain-of-Thought Monitorability
By: Scott Emmons , Roland S. Zimmermann , David K. Elson and more
Potential Business Impact:
Helps humans check AI's thinking steps.
While Chain-of-Thought (CoT) monitoring offers a unique opportunity for AI safety, this opportunity could be lost through shifts in training practices or model architecture. To help preserve monitorability, we propose a pragmatic way to measure two components of it: legibility (whether the reasoning can be followed by a human) and coverage (whether the CoT contains all the reasoning needed for a human to also produce the final output). We implement these metrics with an autorater prompt that enables any capable LLM to compute the legibility and coverage of existing CoTs. After sanity-checking our prompted autorater with synthetic CoT degradations, we apply it to several frontier models on challenging benchmarks, finding that they exhibit high monitorability. We present these metrics, including our complete autorater prompt, as a tool for developers to track how design decisions impact monitorability. While the exact prompt we share is still a preliminary version under ongoing development, we are sharing it now in the hopes that others in the community will find it useful. Our method helps measure the default monitorability of CoT - it should be seen as a complement, not a replacement, for the adversarial stress-testing needed to test robustness against deliberately evasive models.
Similar Papers
Measuring Chain-of-Thought Monitorability Through Faithfulness and Verbosity
Machine Learning (CS)
Shows how computers think to keep them safe.
Investigating CoT Monitorability in Large Reasoning Models
Computation and Language
Helps AI watch itself for bad behavior.
A Concrete Roadmap towards Safety Cases based on Chain-of-Thought Monitoring
Machine Learning (CS)
Keeps AI safe by watching its thinking.