How Difficulty-Aware Staged Reinforcement Learning Enhances LLMs' Reasoning Capabilities: A Preliminary Experimental Study
By: Yunjie Ji, Sitong Zhao, Xiaoyu Tian, and more
Potential Business Impact:
Teaches AI to solve harder math and code problems.
Enhancing the reasoning capabilities of Large Language Models (LLMs) with efficiency and scalability remains a fundamental challenge in artificial intelligence research. This paper presents a rigorous experimental investigation into how difficulty-aware staged reinforcement learning (RL) strategies can substantially improve LLM reasoning performance. Through systematic analysis, we demonstrate that strategically selecting training data according to well-defined difficulty levels markedly enhances RL optimization. Moreover, we introduce a staged training methodology that progressively exposes models to increasingly challenging tasks, further amplifying their reasoning capabilities. Our findings reveal significant cross-domain benefits when simultaneously training models on mathematical reasoning and code generation tasks. Notably, our proposed approach enables a 1.5B-parameter model to achieve 42.3% accuracy on the AIME-2024 benchmark and 89.5% on the MATH-500 benchmark. These results underscore the efficacy of our method in advancing the reasoning proficiency of LLMs. We will open-source our datasets on GitHub and Hugging Face.
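To make the staged idea concrete, here is a minimal Python sketch of difficulty-aware staged RL training as the abstract describes it: bucket problems by a difficulty score, then run RL updates stage by stage from easy to hard. The names (`Problem`, `make_stages`, `staged_rl_training`), the difficulty metric, the stage boundaries, and the `rl_update` callable are all illustrative assumptions; the abstract does not specify the paper's exact difficulty measure or RL algorithm.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Problem:
    prompt: str
    difficulty: float  # assumed metric, e.g. 1 - pass rate of a reference model

def make_stages(problems: List[Problem], boundaries: List[float]) -> List[List[Problem]]:
    """Bucket problems into stages of increasing difficulty.

    `boundaries` are upper difficulty cutoffs per stage, e.g. [0.3, 0.6, 1.0]
    (hypothetical values, not from the paper).
    """
    stages, lower = [], 0.0
    for upper in boundaries:
        stages.append([p for p in problems if lower <= p.difficulty <= upper])
        lower = upper
    return stages

def staged_rl_training(problems: List[Problem],
                       rl_update: Callable[[List[Problem]], None],
                       boundaries: List[float] = [0.3, 0.6, 1.0],
                       epochs_per_stage: int = 1) -> None:
    """Run RL updates stage by stage, easy to hard (curriculum)."""
    for stage_data in make_stages(problems, boundaries):
        for _ in range(epochs_per_stage):
            # One RL pass (e.g., a PPO/GRPO-style update) restricted to
            # problems within the current difficulty band.
            rl_update(stage_data)
```

Under this sketch, cross-domain training as reported in the abstract would amount to mixing math and code problems within each difficulty band before calling `rl_update`.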
Similar Papers
Enhancing Math Reasoning in Small-sized LLMs via Preview Difficulty-Aware Intervention
Machine Learning (CS)
Teaches computers to solve hard math problems.
Curriculum Reinforcement Learning from Easy to Hard Tasks Improves LLM Reasoning
Machine Learning (CS)
Teaches computers to solve hard problems step-by-step.
A Survey of Reinforcement Learning for Large Reasoning Models
Computation and Language
Teaches computers to think and solve hard problems.