Curriculum Reinforcement Learning from Easy to Hard Tasks Improves LLM Reasoning
By: Shubham Parashar, Shurui Gui, Xiner Li, and more
Potential Business Impact:
Teaches computers to solve hard problems step-by-step.
We aim to improve the reasoning capabilities of language models via reinforcement learning (RL). Recent RL post-trained models such as DeepSeek-R1 have demonstrated reasoning abilities on mathematical and coding tasks. However, prior studies suggest that RL alone is less effective at improving reasoning on inherently difficult tasks. Here, we draw inspiration from curriculum learning and propose scheduling tasks from easy to hard (E2H), allowing LLMs to build reasoning skills gradually. Our method is termed E2H Reasoner. Empirically, we observe that although easy tasks are important initially, fading them out through appropriate scheduling is essential to prevent overfitting. Theoretically, we establish convergence guarantees for E2H Reasoner within an approximate policy iteration framework. We derive finite-sample complexity bounds and show that, when tasks are appropriately decomposed and conditioned, learning through curriculum stages requires fewer total samples than direct learning. Experiments across multiple domains show that E2H Reasoner significantly improves the reasoning ability of small LLMs (1.5B to 3B), which otherwise struggle when trained with vanilla RL alone, highlighting the effectiveness of our method.
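The central idea in the abstract, sampling tasks from easy to hard while gradually fading out the easy ones, can be illustrated with a minimal sketch. The linear fade, the two task tiers, and the `sample_task` helper below are illustrative assumptions for exposition, not the paper's actual E2H Reasoner scheduler or hyperparameters.

```python
import random

# Minimal illustrative sketch (not the paper's implementation): a curriculum
# sampler that draws mostly easy tasks early in RL training and linearly
# shifts probability mass toward hard tasks as training progresses.
def sample_task(step, total_steps, easy_tasks, hard_tasks):
    """Pick a training task for this RL step under an easy-to-hard schedule."""
    progress = min(step / total_steps, 1.0)   # 0.0 at the start, 1.0 at the end
    p_easy = 1.0 - progress                   # mass on easy tasks fades to zero
    pool = easy_tasks if random.random() < p_easy else hard_tasks
    return random.choice(pool)

# Usage: interleave easy and hard prompts over 1,000 RL updates.
easy = ["2 + 3 = ?", "Solve x + 1 = 4"]
hard = ["Prove the sum of two odd numbers is even",
        "Count integer solutions of x^2 + y^2 = 25"]
for step in range(1000):
    task = sample_task(step, total_steps=1000, easy_tasks=easy, hard_tasks=hard)
    # ...run one RL update (e.g., a policy-gradient step) on `task`...
```

The linear ramp here is only a placeholder: per the abstract, how easy tasks are faded out matters, and the paper's staged schedule is what carries the convergence and sample-complexity guarantees.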
Similar Papers
Reasoning Curriculum: Bootstrapping Broad LLM Reasoning from Math
Artificial Intelligence
Teaches computers to think better at many tasks.
How Difficulty-Aware Staged Reinforcement Learning Enhances LLMs' Reasoning Capabilities: A Preliminary Experimental Study
Computation and Language
Teaches AI to solve harder math and code problems.
h1: Bootstrapping LLMs to Reason over Longer Horizons via Reinforcement Learning
Machine Learning (CS)
Teaches computers to solve harder math problems.