rSIM: Incentivizing Reasoning Capabilities of LLMs via Reinforced Strategy Injection
By: Sijia Chen, Baochun Li, Di Niu
Large language models (LLMs) are post-trained with reinforcement learning (RL) to become Reasoning Language Models (RLMs); the hallmark of this advanced reasoning is the "aha" moment when the model begins to apply strategies, such as self-reflection and deep thinking, within its chain of thought (CoT). Motivated by this, this paper proposes a novel reinforced strategy injection mechanism (rSIM) that enables any LLM to become an RLM by employing a small planner to guide the LLM's CoT through the adaptive injection of reasoning strategies. To this end, the planner (leader agent) is jointly trained with the LLM (follower agent) via multi-agent RL (MARL), based on a leader-follower framework and simple rule-based rewards. Experimental results show that rSIM turns Qwen2.5-0.5B into an RLM that significantly outperforms Qwen2.5-14B. Moreover, the planner is generalizable: trained once, it can be applied as a plug-in to substantially improve the reasoning capabilities of existing LLMs. The planner also supports continual learning across tasks, allowing its planning ability to improve gradually and generalize to a wider range of problems.
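The leader-follower loop described above can be sketched in miniature. The code below is an illustrative toy, not the paper's implementation: the strategy names, the bandit-style value update for the planner, and the stub follower are all assumptions, standing in for the planner policy, the follower LLM, and the rule-based correctness reward.

```python
import random

# Toy sketch of rSIM's leader-follower loop (all names and rewards here are
# illustrative assumptions, not the paper's exact formulation).
# The "planner" (leader) picks a reasoning strategy to inject into the
# "follower" LLM's chain of thought; a rule-based reward (answer correctness)
# is used to improve the planner.

STRATEGIES = ["<continue>", "<self-reflect>", "<deep-think>"]


def follower_solve(question, strategy):
    """Stub standing in for the follower LLM: in this toy world, only
    injecting "<self-reflect>" lets it recover the correct answer."""
    return question["answer"] if strategy == "<self-reflect>" else "wrong"


def rule_based_reward(prediction, answer):
    """Simple rule-based reward, as in the paper's high-level description."""
    return 1.0 if prediction == answer else 0.0


class Planner:
    """Bandit-style leader agent: one value estimate per strategy
    (a stand-in for the small planner model trained with MARL)."""

    def __init__(self, lr=0.1, eps=0.1):
        self.q = {s: 0.0 for s in STRATEGIES}
        self.lr, self.eps = lr, eps

    def act(self, rng):
        # Epsilon-greedy: mostly inject the best-known strategy.
        if rng.random() < self.eps:
            return rng.choice(STRATEGIES)
        return max(self.q, key=self.q.get)

    def update(self, strategy, reward):
        # Incremental value update toward the observed reward.
        self.q[strategy] += self.lr * (reward - self.q[strategy])


def train(episodes=500, seed=0):
    rng = random.Random(seed)
    planner = Planner()
    question = {"text": "2 + 2 = ?", "answer": "4"}
    for _ in range(episodes):
        strategy = planner.act(rng)                # leader injects a strategy
        pred = follower_solve(question, strategy)  # follower continues the CoT
        planner.update(strategy,
                       rule_based_reward(pred, question["answer"]))
    return planner


planner = train()
print(max(planner.q, key=planner.q.get))  # strategy the planner learned to prefer
```

In this toy setup the planner's value estimate for the rewarded strategy rises above the others, mirroring the idea that the leader learns *when* to inject which strategy purely from outcome-based rewards, without modifying the follower's answer-generation logic.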
Similar Papers
- ReaLM: Reflection-Enhanced Autonomous Reasoning with Small Language Models (Computation and Language): teaches small language models to reason better on their own.
- Reasoning Under 1 Billion: Memory-Augmented Reinforcement Learning for Large Language Models (Machine Learning, CS): helps small models learn to reason via memory-augmented RL.
- R1-Searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning (Artificial Intelligence): trains LLMs to search the web for answers.