Act Only When It Pays: Efficient Reinforcement Learning for LLM Reasoning via Selective Rollouts
By: Haizhong Zheng, Yang Zhou, Brian R. Bartoldson, and more
Potential Business Impact:
Makes AI learn faster by skipping boring parts.
Reinforcement learning algorithms such as PPO and GRPO have powered recent breakthroughs in LLM reasoning. Scaling rollouts to sample more prompts enables models to selectively use higher-quality data for training, which can stabilize RL training and improve model performance. However, this comes at the cost of significant computational overhead. In this paper, we show that a substantial portion of this overhead can be avoided by skipping uninformative prompts before rollout. Our analysis of reward dynamics reveals a strong temporal consistency in prompt value: prompts that are uninformative in one epoch of training are likely to remain uninformative in future epochs. Based on these insights, we propose GRESO (GRPO with Efficient Selective Rollout), an online, lightweight pre-rollout filtering algorithm that predicts and skips uninformative prompts using reward training dynamics. By evaluating GRESO on a broad range of math reasoning benchmarks and models, such as Qwen2.5-Math-1.5B, DeepSeek-R1-Distill-Qwen-1.5B, and Qwen2.5-Math-7B, we show that GRESO achieves up to 2.4x wall-clock time speedup in rollout and up to 2.0x speedup in total training time without accuracy degradation.
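The filtering idea lends itself to a compact sketch. Below is a minimal, illustrative Python implementation of a pre-rollout filter in the spirit of GRESO; the class name, the `skip_prob` parameter, and the streak-based skip heuristic are assumptions made for illustration, not the paper's exact algorithm.

```python
import random
from collections import defaultdict


class SelectiveRolloutFilter:
    """Illustrative pre-rollout filter in the spirit of GRESO.

    Idea: in GRPO, a prompt whose sampled responses all receive the same
    reward (e.g., all correct or all wrong) yields zero advantage and thus
    no gradient signal. The paper observes that such prompts tend to stay
    uninformative across epochs, so they can be skipped probabilistically
    before rollout. The specific heuristic below is a hypothetical stand-in.
    """

    def __init__(self, skip_prob: float = 0.8):
        self.skip_prob = skip_prob  # hypothetical hyperparameter, not from the paper
        self.zero_var_streak = defaultdict(int)  # consecutive zero-variance epochs per prompt

    def should_rollout(self, prompt_id: str) -> bool:
        """Decide before rollout whether this prompt is worth sampling."""
        streak = self.zero_var_streak[prompt_id]
        if streak == 0:
            return True  # no evidence the prompt is uninformative; always roll out
        # The longer the zero-variance streak, the more likely we skip,
        # while keeping a small exploration probability to re-check the prompt.
        return random.random() > min(self.skip_prob * streak, 0.95)

    def update(self, prompt_id: str, rewards: list[float]) -> None:
        """Record reward dynamics after rollout for future filtering decisions."""
        if len(set(rewards)) <= 1:  # zero reward variance: no learning signal
            self.zero_var_streak[prompt_id] += 1
        else:
            self.zero_var_streak[prompt_id] = 0
```

In a training loop, `should_rollout` would gate each prompt before generation, and `update` would record the rewards of any prompts that were rolled out, so the filter's estimate of prompt value stays current as the model improves.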
Similar Papers
A Minimalist Approach to LLM Reasoning: from Rejection Sampling to Reinforce
Machine Learning (CS)
Teaches computers to learn better from mistakes.
XRPO: Pushing the limits of GRPO with Targeted Exploration and Exploitation
Machine Learning (CS)
Teaches AI to solve harder problems faster.
Explore Data Left Behind in Reinforcement Learning for Reasoning Language Models
Computation and Language
Teaches computers to solve math problems better.