Webscale-RL: Automated Data Pipeline for Scaling RL Data to Pretraining Levels
By: Zhepeng Cen, Haolin Chen, Shiyu Wang, and more
Potential Business Impact:
Teaches computers to learn better with less data.
Large Language Models (LLMs) have achieved remarkable success through imitation learning on vast text corpora, but this paradigm creates a training-generation gap and limits robust reasoning. Reinforcement learning (RL) offers a more data-efficient solution capable of bridging this gap, yet its application has been constrained by a critical data bottleneck: existing RL datasets are orders of magnitude smaller and less diverse than web-scale pre-training corpora. To address this, we introduce the Webscale-RL pipeline, a scalable data engine that systematically converts large-scale pre-training documents into millions of diverse, verifiable question-answer pairs for RL. Using this pipeline, we construct the Webscale-RL dataset, containing 1.2 million examples across more than 9 domains. Our experiments show that the model trained on this dataset significantly outperforms continual pretraining and strong data refinement baselines across a suite of benchmarks. Notably, RL training with our dataset proves substantially more efficient, achieving the performance of continual pre-training with up to 100× fewer tokens. Our work presents a viable path toward scaling RL to pre-training levels, enabling more capable and efficient language models.
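The abstract describes converting pre-training documents into verifiable question-answer pairs. A minimal sketch of that idea is shown below; the paper does not specify its implementation, so the `generate_qa` callable (standing in for an LLM proposal step), the substring-based verifiability check, and the `RLExample` record are all illustrative assumptions, not the authors' actual pipeline.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple


@dataclass
class RLExample:
    """One RL training example: a question with a verifiable answer."""
    question: str
    answer: str   # ground-truth answer, checkable against the source
    domain: str


def document_to_rl_example(
    doc: str,
    domain: str,
    generate_qa: Callable[[str], Tuple[str, str]],
) -> Optional[RLExample]:
    """Convert one pre-training document into a verifiable QA pair.

    `generate_qa` is a placeholder for an LLM call that proposes a
    (question, answer) pair grounded in `doc`. As a crude proxy for
    verifiability, we keep a pair only if the answer string appears
    verbatim in the source document.
    """
    question, answer = generate_qa(doc)
    if answer and answer in doc:  # reject unverifiable pairs
        return RLExample(question, answer, domain)
    return None


# Toy generator for demonstration: asks for the last word of the
# document's first sentence, which is trivially verifiable.
def toy_qa(doc: str) -> Tuple[str, str]:
    first_sentence = doc.split(".")[0].strip()
    return (
        f"What is the last word of: '{first_sentence}'?",
        first_sentence.split()[-1],
    )


example = document_to_rl_example(
    "Water boils at 100 degrees Celsius. It freezes at 0 degrees.",
    "science",
    toy_qa,
)
```

A real pipeline would replace `toy_qa` with model-driven question generation and a much stronger verifier, and run this conversion over millions of documents per domain.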
Similar Papers
Reinforcement Learning on Pre-Training Data
Computation and Language
Teaches computers to learn better from books.
PipelineRL: Faster On-policy Reinforcement Learning for Long Sequence Generation
Machine Learning (CS)
Trains AI faster and smarter using new methods.
DeepResearcher: Scaling Deep Research via Reinforcement Learning in Real-world Environments
Artificial Intelligence
Helps computers learn to research the real internet.