PipelineRL: Faster On-policy Reinforcement Learning for Long Sequence Generation
By: Alexandre Piché, Ehsan Kamaloo, Rafael Pardinas, and more
Potential Business Impact:
Cuts the time and GPU cost of training reasoning-capable LLMs by keeping accelerators busy on fresh, on-policy data.
Reinforcement Learning (RL) is increasingly used to enhance the reasoning capabilities of Large Language Models (LLMs). However, scaling these RL methods effectively is challenging, primarily because it is difficult to maintain high AI accelerator utilization without generating stale, off-policy data that harms common RL algorithms. This paper introduces PipelineRL, an approach designed to achieve a superior trade-off between hardware efficiency and data on-policyness for LLM training. PipelineRL runs data generation and model training concurrently and asynchronously, distinguished by a novel in-flight weight update mechanism. This mechanism allows the LLM generation engine to receive updated model weights with minimal interruption while token sequences are still being generated, thereby maximizing both accelerator utilization and the freshness of training data. Experiments on long-form reasoning tasks using 128 H100 GPUs demonstrate that PipelineRL learns approximately $2\times$ faster than conventional RL baselines while keeping training data highly on-policy. A scalable and modular open-source implementation of PipelineRL is also released as a key contribution.
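The core idea, concurrent generation and training where new weights are swapped into the generation engine between token steps rather than between full rollouts, can be pictured with a small toy sketch. This is a hypothetical illustration using Python's asyncio, not the released PipelineRL code; names such as `ToyGenerator`, `generator_loop`, and `trainer` are invented for the example, and the sleeps stand in for real decoding and RL update work.

```python
# Toy sketch of concurrent generation/training with in-flight weight updates.
# Hypothetical illustration only; not the PipelineRL API.
import asyncio
import random


class ToyGenerator:
    """Stands in for the LLM generation engine; tracks the current weight version."""

    def __init__(self) -> None:
        self.weight_version = 0

    def load_weights(self, version: int) -> None:
        # In-flight update: new weights are swapped in between token steps,
        # so partially generated sequences continue with fresher weights.
        self.weight_version = version

    async def generate(self, prompt_id: int, num_tokens: int) -> dict:
        tokens = []
        for _ in range(num_tokens):
            await asyncio.sleep(0.01)            # pretend to decode one token
            tokens.append(self.weight_version)   # record which weights produced it
        return {"prompt": prompt_id, "weight_versions": tokens}


async def generator_loop(gen: ToyGenerator, rollouts: asyncio.Queue, n: int) -> None:
    # Keeps the "accelerator" busy producing rollouts without waiting for training.
    for prompt_id in range(n):
        rollout = await gen.generate(prompt_id, num_tokens=random.randint(5, 20))
        await rollouts.put(rollout)


async def trainer(gen: ToyGenerator, rollouts: asyncio.Queue, steps: int) -> None:
    for step in range(1, steps + 1):
        batch = [await rollouts.get() for _ in range(2)]  # consume fresh rollouts
        await asyncio.sleep(0.05)                         # pretend to run an RL update
        gen.load_weights(step)                            # broadcast new weights in flight
        print(f"step {step}: trained on prompts {[b['prompt'] for b in batch]}")


async def main() -> None:
    gen, rollouts = ToyGenerator(), asyncio.Queue()
    await asyncio.gather(
        generator_loop(gen, rollouts, n=8),
        trainer(gen, rollouts, steps=3),
    )


asyncio.run(main())
```

In this sketch, a sequence that is mid-generation when `load_weights` fires ends up with mixed entries in `weight_versions`, which is the toy analogue of generation continuing under fresher weights instead of stalling or producing fully stale data.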
Similar Papers
Webscale-RL: Automated Data Pipeline for Scaling RL Data to Pretraining Levels
Computation and Language
Teaches computers to learn better with less data.
RollPacker: Mitigating Long-Tail Rollouts for Fast, Synchronous RL Post-Training
Distributed, Parallel, and Cluster Computing
Makes AI learn faster by fixing computer work.
History Rhymes: Accelerating LLM Reinforcement Learning with RhymeRL
Machine Learning (CS)
Makes AI learn faster and use computers better.