Reasoning Models Will Blatantly Lie About Their Reasoning
By: William Walden
It has been shown that Large Reasoning Models (LRMs) may not *say what they think*: they do not always volunteer information about how certain parts of the input influence their reasoning. But it is one thing for a model to *omit* such information and another, worse thing to *lie* about it. Here, we extend the work of Chen et al. (2025) to show that LRMs will do just this: they will flatly deny relying on hints provided in the prompt in answering multiple choice questions -- even when directly asked to reflect on unusual (i.e. hinted) prompt content, even when allowed to use hints, and even though experiments *show* them to be using the hints. Our results thus have discouraging implications for CoT monitoring and interpretability.
Similar Papers
Reasoning Models Reason Well, Until They Don't
Artificial Intelligence
Makes smart computers better at solving hard problems.
Thinking Out Loud: Do Reasoning Models Know When They're Right?
Computation and Language
Makes AI better at knowing when it's wrong.
Large Reasoning Models are not thinking straight: on the unreliability of thinking trajectories
Machine Learning (CS)
Models get stuck thinking too much, ignore right answers.