HRM-Agent: Training a recurrent reasoning model in dynamic environments using reinforcement learning
By: Long H Dang, David Rawlinson
Potential Business Impact:
Helps robots learn to solve problems faster.
The Hierarchical Reasoning Model (HRM) has impressive reasoning abilities given its small size, but has so far only been applied to supervised, static, fully observable problems. One of HRM's strengths is its ability to adapt its computational effort to the difficulty of the problem. In its current form, however, it cannot integrate and reuse computation from previous time-steps when the problem is dynamic, uncertain, or partially observable, nor can it be applied where the correct action is undefined; these are characteristics of many real-world problems. This paper presents HRM-Agent, a variant of HRM trained using only reinforcement learning. We show that HRM can learn to navigate to goals in dynamic and uncertain maze environments. Recent work suggests that HRM's reasoning abilities stem from its recurrent inference process. We explore the dynamics of this recurrent inference process and find evidence that it successfully reuses computation from earlier environment time-steps.
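The core idea the abstract describes, reusing recurrent computation across environment time-steps rather than restarting inference from scratch at each step, can be illustrated with a minimal sketch. This is not the paper's architecture or training code; the class, weight shapes, and the number of inner iterations are all illustrative assumptions.

```python
import numpy as np

class RecurrentAgent:
    """Toy recurrent agent: hidden state persists across environment steps,
    so inference at step t starts from the computation done at step t-1.
    Purely illustrative; not the HRM-Agent architecture."""

    def __init__(self, obs_dim, hidden_dim, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (hidden_dim, obs_dim))
        self.W_h = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))
        self.W_out = rng.normal(0.0, 0.1, (n_actions, hidden_dim))
        self.h = np.zeros(hidden_dim)

    def reset(self):
        # Reset only at episode boundaries; within an episode the hidden
        # state is carried over instead of being re-initialised per step.
        self.h = np.zeros_like(self.h)

    def act(self, obs, inner_steps=3):
        # A few recurrent "reasoning" iterations per environment step,
        # starting from the carried-over hidden state (an adaptive-compute
        # scheme could vary inner_steps with problem difficulty).
        for _ in range(inner_steps):
            self.h = np.tanh(self.W_in @ obs + self.W_h @ self.h)
        return int(np.argmax(self.W_out @ self.h))

agent = RecurrentAgent(obs_dim=4, hidden_dim=8, n_actions=3)
agent.reset()
a0 = agent.act(np.ones(4))
h_after_first = agent.h.copy()
a1 = agent.act(np.ones(4))  # same observation, but inference continues from prior state
```

Because the hidden state is not reset between `act` calls, the second call with an identical observation still performs different computation, which is the property the paper probes when it looks for reuse of computation from earlier environment time-steps.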
Similar Papers
Are Your Reasoning Models Reasoning or Guessing? A Mechanistic Analysis of Hierarchical Reasoning Models
Artificial Intelligence
Helps AI solve puzzles by guessing smarter.
Towards Hierarchical Multi-Step Reward Models for Enhanced Reasoning in Large Language Models
Computation and Language
Teaches computers to think better, step-by-step.
Emergent Hierarchical Reasoning in LLMs through Reinforcement Learning
Artificial Intelligence
Teaches computers to think smarter, like humans.