Explore Data Left Behind in Reinforcement Learning for Reasoning Language Models
By: Chenxi Liu, Junjie Liang, Yuqi Jia, and more
Potential Business Impact:
Teaches computers to solve math problems better.
Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as an effective approach for improving the reasoning abilities of large language models (LLMs). The Group Relative Policy Optimization (GRPO) family has demonstrated strong performance in training LLMs with RLVR. However, as models train longer and scale larger, more training prompts become residual prompts: prompts whose sampled responses all receive the same reward (zero variance) and therefore provide no training signal. Consequently, fewer prompts contribute to training, reducing diversity and hindering effectiveness. To fully exploit these residual prompts, we propose the Explore Residual Prompts in Policy Optimization (ERPO) framework, which encourages exploration on residual prompts and reactivates their training signals. ERPO maintains a history tracker for each prompt and adaptively increases the sampling temperature for residual prompts that previously produced all-correct responses. The higher temperature encourages the model to generate more diverse reasoning traces, introducing incorrect responses that revive the training signal. Empirical results on the Qwen2.5 series demonstrate that ERPO consistently surpasses strong baselines across multiple mathematical reasoning benchmarks.
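The mechanism described in the abstract lends itself to a short sketch. The Python snippet below is a minimal, illustrative interpretation of a per-prompt history tracker with an adaptive temperature schedule, not the authors' implementation: the class name ResidualPromptTracker, the step sizes, the temperature cap, and the toy rollout model are all assumptions introduced for illustration. It shows the core idea that a rollout group with identical (all-correct) rewards yields zero group-relative advantage under GRPO, so the tracker raises that prompt's sampling temperature to reintroduce diverse, occasionally incorrect traces.

import random

# Minimal sketch of the exploration idea described in the abstract.
# All names and hyperparameters below are illustrative assumptions,
# not the authors' API or settings.

class ResidualPromptTracker:
    """Tracks, per prompt, consecutive rollout groups with zero reward variance."""

    def __init__(self, base_temperature=1.0, temperature_step=0.2, max_temperature=1.6):
        self.base_temperature = base_temperature
        self.temperature_step = temperature_step
        self.max_temperature = max_temperature
        self.history = {}  # prompt_id -> consecutive all-correct groups

    def temperature_for(self, prompt_id):
        # Raise the sampling temperature for prompts that kept producing
        # all-correct groups, so future rollouts become more diverse and
        # some incorrect traces restore a nonzero group-relative advantage.
        streak = self.history.get(prompt_id, 0)
        return min(self.base_temperature + streak * self.temperature_step,
                   self.max_temperature)

    def update(self, prompt_id, rewards):
        # A group with zero reward variance and all-correct answers is
        # "residual": GRPO's group-normalized advantages are all zero,
        # so the prompt contributes no gradient.
        if len(set(rewards)) == 1 and rewards[0] == 1.0:
            self.history[prompt_id] = self.history.get(prompt_id, 0) + 1
        else:
            self.history[prompt_id] = 0


# Toy usage: a stand-in for verifier rewards, where higher temperature
# slightly lowers the per-response chance of being correct.
def fake_rollout_rewards(group_size, temperature):
    p_correct = max(0.0, 1.0 - 0.15 * (temperature - 1.0))  # toy model only
    return [1.0 if random.random() < p_correct else 0.0 for _ in range(group_size)]


tracker = ResidualPromptTracker()
for step in range(5):
    temp = tracker.temperature_for("prompt-42")
    rewards = fake_rollout_rewards(group_size=8, temperature=temp)
    tracker.update("prompt-42", rewards)
    print(f"step={step} temperature={temp:.2f} rewards={rewards}")

In an actual GRPO training loop, the tracker's temperature would feed the sampler for each prompt's rollout group and the rewards would come from the verifier; the toy reward function above only exists to make the sketch runnable.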
Similar Papers
ExGRPO: Learning to Reason from Experience
Machine Learning (CS)
Teaches computers to learn better from mistakes.
EEPO: Exploration-Enhanced Policy Optimization via Sample-Then-Forget
Computation and Language
Helps AI learn new things by forgetting and trying again.
RePO: Replay-Enhanced Policy Optimization
Computation and Language
Makes AI smarter with less computer power.