Prioritized Replay for RL Post-training
By: Mehdi Fatemi
Potential Business Impact:
Teaches AI to learn harder problems faster.
We introduce a problem-level prioritization framework for RL post-training of large language models. Building on insights from prioritized replay in deep RL, as well as prior observations that problems with intermediate success rates tend to produce stronger learning signals under methods such as GRPO, our approach selects problems according to a simple, model-driven priority score derived from empirical success statistics. In contrast to conventional curriculum strategies that emphasize easier tasks early in training, the resulting schedule naturally concentrates training on problems that are neither consistently solved nor consistently failed, while deprioritizing those that contribute little gradient information. The method yields a continuously adapting, fully automatic prioritization process that requires no predefined difficulty tiers, auxiliary predictors, or external labels. We further introduce lightweight mechanisms for practical deployment, including heap-based prioritized sampling and periodic retesting of consistently solved and consistently failed problems to mitigate starvation and forgetting. Overall, the approach offers a principled and scalable alternative to manually designed curricula while aligning data selection directly with the dynamics of GRPO-based post-training.
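As a minimal sketch of how such a sampler might be organized, the code below combines a heap keyed on a success-rate-based priority with occasional retesting of parked problems. The concrete score p(1 - p) over the empirical success rate p (peaked at intermediate success rates), the retest probability, and the parking threshold are illustrative assumptions, not the paper's exact formulation.

```python
import heapq
import random


class PrioritizedProblemSampler:
    """Heap-based prioritized sampling over problems, driven by empirical success rates."""

    def __init__(self, problem_ids, retest_prob=0.05, prior=(1, 2)):
        # Per-problem [successes, attempts], seeded with a light prior so that
        # untried problems start at an intermediate estimated success rate.
        self.stats = {pid: list(prior) for pid in problem_ids}
        self.retest_prob = retest_prob
        self.parked = set()  # consistently solved or consistently failed problems
        # Max-heap via negated priorities: entries are (-priority, problem_id).
        self.heap = [(-self._priority(pid), pid) for pid in problem_ids]
        heapq.heapify(self.heap)

    def _priority(self, pid):
        s, n = self.stats[pid]
        p = s / n
        return p * (1.0 - p)  # highest for problems solved about half the time

    def sample(self):
        """Pop the highest-priority problem; occasionally retest a parked one."""
        if self.parked and (not self.heap or random.random() < self.retest_prob):
            return random.choice(tuple(self.parked))
        _, pid = heapq.heappop(self.heap)
        return pid

    def update(self, pid, rollout_successes, park_threshold=0.01):
        """Fold one GRPO group's binary success flags into pid's statistics."""
        s, n = self.stats[pid]
        self.stats[pid] = [s + sum(rollout_successes), n + len(rollout_successes)]
        pri = self._priority(pid)
        if pri < park_threshold:
            # Nearly always solved or nearly always failed: little gradient
            # signal, so park it and revisit only via periodic retesting.
            self.parked.add(pid)
        else:
            self.parked.discard(pid)
            heapq.heappush(self.heap, (-pri, pid))
```

In a training loop, one would call sample() to pick the next problem, run a GRPO group of rollouts on it, and pass the per-rollout success flags back through update(); the retest branch keeps parked problems from being starved and catches forgetting on problems that were previously solved.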
Similar Papers
On the Hidden Objective Biases of Group-based Reinforcement Learning
Machine Learning (CS)
Fixes AI learning to be more fair and accurate.
GRPO-RM: Fine-Tuning Representation Models via GRPO-Driven Reinforcement Learning
Machine Learning (CS)
Teaches AI to learn better from data.
SuRe: Surprise-Driven Prioritised Replay for Continual LLM Learning
Machine Learning (CS)
Teaches AI to learn new things without forgetting old ones.