
When Can Large Reasoning Models Save Thinking? Mechanistic Analysis of Behavioral Divergence in Reasoning

Published: May 21, 2025 | arXiv ID: 2505.15276v1

By: Rongzhi Zhu, Yi Liu, Zequn Sun, and more

Potential Business Impact:

Lets AI reasoning models skip or shorten their step-by-step thinking so they answer faster without giving up accuracy.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large reasoning models (LRMs) have significantly advanced performance on complex tasks, yet their tendency to overthink introduces inefficiencies. This study investigates the internal mechanisms of reinforcement learning (RL)-trained LRMs when prompted to save thinking, revealing three distinct thinking modes: no thinking (NT), explicit thinking (ET), and implicit thinking (IT). Through comprehensive analysis of confidence in thinking termination, attention from thinking to generation, and attentional focus on input sections, we uncover the key factors influencing these reasoning behaviors. We further find that NT reduces output length at the cost of accuracy, while ET and IT maintain accuracy with reduced response length. Our findings expose fundamental inconsistencies in RL-optimized LRMs, necessitating adaptive improvements for reliable efficiency.
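One of the quantities the abstract mentions is the attention flowing from the thinking span to the generated answer. The sketch below, which is not the authors' code, illustrates one plausible way to measure this with an open-weights reasoning model: it averages, over layers, heads, and answer tokens, the attention mass that answer tokens place on the thinking span. The model name, the `<think>...</think>` delimiters, and the span boundaries are illustrative assumptions; the paper's exact metric may differ.

```python
# Minimal sketch (assumptions noted above): estimate how much attention the
# answer tokens place on the thinking span of a reasoning model's output.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # illustrative LRM choice

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

question = "What is 2 + 3?"
thinking = "<think>2 plus 3 equals 5.</think>"   # assumed thinking delimiters
answer = "The answer is 5."

# Tokenize the pieces separately so we know where each span starts and ends.
q_ids = tok(question, return_tensors="pt").input_ids
t_ids = tok(thinking, add_special_tokens=False, return_tensors="pt").input_ids
a_ids = tok(answer, add_special_tokens=False, return_tensors="pt").input_ids
input_ids = torch.cat([q_ids, t_ids, a_ids], dim=1)

q_len, t_len, a_len = q_ids.shape[1], t_ids.shape[1], a_ids.shape[1]
think_slice = slice(q_len, q_len + t_len)                    # thinking positions
answer_slice = slice(q_len + t_len, q_len + t_len + a_len)   # answer positions

with torch.no_grad():
    out = model(input_ids, output_attentions=True)

# out.attentions: one (batch, heads, seq, seq) tensor per layer.
attn = torch.stack(out.attentions)
# Sum the attention each answer token places on thinking-span positions,
# then average over layers, heads, and answer tokens.
mass = attn[:, 0, :, answer_slice, think_slice].sum(-1).mean()
print(f"Mean attention from answer tokens to thinking span: {mass:.3f}")
```

Comparing this score across responses that do or do not contain an explicit thinking span is one simple way to probe the NT/ET/IT distinction the paper describes, though the study's actual analysis pipeline is likely more involved.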

Country of Origin
🇨🇳 China

Page Count
11 pages

Category
Computer Science:
Artificial Intelligence