When Can Large Reasoning Models Save Thinking? Mechanistic Analysis of Behavioral Divergence in Reasoning
By: Rongzhi Zhu, Yi Liu, Zequn Sun, and more
Potential Business Impact:
Makes smart computers think less and answer faster without losing accuracy.
Large reasoning models (LRMs) have significantly advanced performance on complex tasks, yet their tendency to overthink introduces inefficiencies. This study investigates the internal mechanisms of reinforcement learning (RL)-trained LRMs when prompted to save thinking, revealing three distinct thinking modes: no thinking (NT), explicit thinking (ET), and implicit thinking (IT). Through comprehensive analysis of confidence in thinking termination, attention from thinking to generation, and attentional focus on input sections, we uncover key factors influencing these reasoning behaviors. We further find that NT reduces output length at the cost of accuracy, while ET and IT maintain accuracy with reduced response length. Our findings expose fundamental inconsistencies in RL-optimized LRMs, necessitating adaptive improvements for reliable efficiency.
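The abstract's notion of measuring attention from the thinking span to the generated answer can be made concrete with a small probe. The sketch below is not the authors' code: the model name, the <think>/</think> delimiters, and the layer/head averaging are illustrative assumptions. It loads an R1-style model with Hugging Face Transformers and computes, for a toy transcript, the average attention mass that answer tokens place on the thinking span.

```python
# Minimal, illustrative sketch (not the paper's code): estimate how much
# attention the generated answer pays to the preceding thinking span in an
# R1-style reasoning model. Model name, <think>/</think> delimiters, and the
# layer/head averaging are assumptions made for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed RL-trained LRM
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, attn_implementation="eager"  # eager attention so weights are returned
)
model.eval()

# Toy transcript with an explicit thinking (ET) segment followed by the answer.
text = (
    "Question: What is 17 * 23?\n"
    "<think>17 * 23 = 17 * 20 + 17 * 3 = 340 + 51 = 391.</think>\n"
    "Answer: 391"
)
enc = tok(text, return_tensors="pt")
ids = enc.input_ids[0]

def find_subseq(seq, sub):
    """Return the start index of the first occurrence of `sub` in `seq`."""
    for i in range(len(seq) - len(sub) + 1):
        if seq[i:i + len(sub)].tolist() == list(sub):
            return i
    raise ValueError("delimiter not found")

open_tag = tok("<think>", add_special_tokens=False).input_ids
close_tag = tok("</think>", add_special_tokens=False).input_ids
think_start = find_subseq(ids, open_tag)
think_end = find_subseq(ids, close_tag) + len(close_tag)

with torch.no_grad():
    out = model(**enc, output_attentions=True)

# Average attention over layers and heads -> (seq_len, seq_len) matrix.
att = torch.stack(out.attentions).mean(dim=(0, 2))[0]
# Rows after think_end: answer tokens as queries; columns in the thinking span as keys.
frac_to_thinking = att[think_end:, think_start:think_end].sum(dim=-1).mean().item()
print(f"Mean attention from answer tokens to the thinking span: {frac_to_thinking:.3f}")
```

Comparing this statistic across NT, ET, and IT responses is one simple way to probe, under these assumptions, how strongly the generated answer relies on the thinking segment.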
Similar Papers
Large Reasoning Models are not thinking straight: on the unreliability of thinking trajectories
Machine Learning (CS)
Models get stuck thinking too much and ignore right answers.
Think or Not? Exploring Thinking Efficiency in Large Reasoning Models via an Information-Theoretic Lens
Computation and Language
Makes smart computers think shorter, faster, and better.
Don't Overthink It: A Survey of Efficient R1-style Large Reasoning Models
Artificial Intelligence
Makes AI think faster without losing accuracy.