Emergent Hierarchical Reasoning in LLMs through Reinforcement Learning
By: Haozhe Wang, Qixin Xu, Che Liu, and others
Potential Business Impact:
Teaches computers to think smarter, like humans.
Reinforcement Learning (RL) has proven highly effective at enhancing the complex reasoning abilities of Large Language Models (LLMs), yet the underlying mechanisms driving this success remain largely opaque. Our analysis reveals that puzzling phenomena like "aha moments", "length-scaling", and entropy dynamics are not disparate occurrences but hallmarks of an emergent reasoning hierarchy, akin to the separation of high-level strategic planning from low-level procedural execution in human cognition. We uncover a compelling two-phase dynamic: initially, a model is constrained by procedural correctness and must improve its low-level skills. The learning bottleneck then decisively shifts, with performance gains being driven by the exploration and mastery of high-level strategic planning. This insight exposes a core inefficiency in prevailing RL algorithms like GRPO, which apply optimization pressure agnostically and dilute the learning signal across all tokens. To address this, we propose HIerarchy-Aware Credit Assignment (HICRA), an algorithm that concentrates optimization efforts on high-impact planning tokens. HICRA significantly outperforms strong baselines, demonstrating that focusing on this strategic bottleneck is key to unlocking advanced reasoning. Furthermore, we validate semantic entropy as a superior compass for measuring strategic exploration over misleading metrics such as token-level entropy.
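To make the two ideas concrete, here is a minimal sketch of what "concentrating credit on planning tokens" and "semantic entropy" could look like. This is an illustrative assumption, not the paper's implementation: the function names, the boolean planning-token mask, the `alpha` weighting, and the use of exact-string matching as a stand-in for semantic clustering are all hypothetical simplifications.

```python
import math
from collections import Counter

def hicra_advantages(advantages, planning_mask, alpha=1.0):
    """Hypothetical HICRA-style credit assignment: amplify the per-token
    advantage on planning tokens by (1 + alpha), leaving execution tokens
    unchanged. The real algorithm's token classification and weighting
    scheme may differ."""
    return [
        a * (1.0 + alpha) if is_plan else a
        for a, is_plan in zip(advantages, planning_mask)
    ]

def semantic_entropy(answers):
    """Shannon entropy over distinct answer meanings. Here, exact string
    equality stands in for semantic clustering: many surface-level token
    variations of the same answer contribute no extra entropy."""
    counts = Counter(answers)
    n = len(answers)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Toy example: a 5-token response where tokens 0 and 2 are "planning" tokens.
adv = [0.5, 0.1, -0.2, 0.3, 0.0]
mask = [True, False, True, False, False]
print(hicra_advantages(adv, mask, alpha=1.0))  # planning tokens get 2x credit

# Four sampled answers collapsing into two semantic clusters -> 1 bit.
print(semantic_entropy(["x = 4", "x = 4", "x = 7", "x = 7"]))
```

The contrast with token-level entropy is the point of the second function: a model rephrasing one strategy many ways has high token entropy but low semantic entropy, so only the latter tracks genuine strategic exploration.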
Similar Papers
Efficient Reinforcement Learning with Semantic and Token Entropy for LLM Reasoning
Artificial Intelligence
Makes AI smarter and better at solving problems.
Revisiting LLM Reasoning via Information Bottleneck
Artificial Intelligence
Makes computers think better at math problems.