RePro: Training Language Models to Faithfully Recycle the Web for Pretraining
By: Zichun Yu, Chenyan Xiong
Potential Business Impact:
Makes AI smarter with less data.
High-quality pretraining data is the fossil fuel of large language models (LLMs), yet its reserves are running low for frontier models. In this paper, we introduce RePro, a novel web recycling method that trains a relatively small LM with reinforcement learning to generate effective and faithful rephrasings of pretraining data. Specifically, we design one quality reward and three faithfulness rewards, optimizing the LM rephraser to convert organic data into high-quality rephrasings while preserving the core semantics and structure of the original text. In our experiments, we train a 4B rephraser to recycle 72B tokens sampled from DCLM-RefinedWeb. Pretraining results on 400M and 1.4B models demonstrate that RePro delivers 4.7%-14.0% relative accuracy gains over the organic-only baseline on 22 downstream tasks. RePro also outperforms ReWire, the state-of-the-art web recycling method that prompts a 70B rephraser, as well as the organic baseline with a 4x larger data pool. Experiments with different amounts of recycled data show that RePro improves organic data efficiency by 2-3x. Individual and distributional analyses confirm that RePro preserves more critical information and more faithfully reflects the characteristics of organic data than prompting-based methods. Together, these results show that RePro provides an efficient and controllable path to effectively harnessing the fossil fuel of LLM pretraining. We open-source our code, rephraser, and recycled data at https://github.com/cxcscmu/RePro.
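To make the reward design concrete, the sketch below shows one plausible way a quality signal and several faithfulness checks could be combined into the single scalar that reinforcement learning optimizes. This is a minimal illustration under assumptions, not the paper's implementation: the function names, weights, and toy scorers are invented for this example, and the paper uses three faithfulness rewards from learned or rule-based signals rather than the two stand-ins here.

```python
# Illustrative sketch (not the authors' code): combining one quality reward
# with multiple faithfulness rewards into a scalar RL reward for a rephraser.
# All names, weights, and scorers below are assumptions for demonstration.

def composite_reward(original: str, rephrased: str,
                     quality_fn, faithfulness_fns,
                     w_quality: float = 1.0, w_faith: float = 1.0) -> float:
    """Reward a rephrasing for quality, gated by its weakest faithfulness
    score so that a single failed check drags the whole reward down."""
    quality = quality_fn(rephrased)
    faithfulness = min(fn(original, rephrased) for fn in faithfulness_fns)
    return w_quality * quality + w_faith * faithfulness

if __name__ == "__main__":
    # Toy scorers standing in for learned classifiers / similarity models.
    quality_fn = lambda text: min(len(set(text.split())) / 50.0, 1.0)
    length_ratio = lambda o, r: min(len(r), len(o)) / max(len(r), len(o), 1)
    keyword_overlap = lambda o, r: (len(set(o.split()) & set(r.split()))
                                    / max(len(set(o.split())), 1))
    score = composite_reward(
        "the web page text about solar panels and grid storage",
        "a rephrased passage on solar panels and grid storage",
        quality_fn, [length_ratio, keyword_overlap])
    print(f"composite reward: {score:.3f}")
```

Taking the minimum over the faithfulness checks, rather than their average, is one design choice that would penalize a rephrasing that scores well overall but violates any single constraint; whether RePro aggregates its three faithfulness rewards this way is not stated in the abstract.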
Similar Papers
Recycling the Web: A Method to Enhance Pre-training Data Quality and Quantity for Language Models
Computation and Language
Makes computer brains learn better from old internet text.
RePro: Leveraging Large Language Models for Semi-Automated Reproduction of Networking Research Results
Networking and Internet Architecture
Helps computers rebuild network programs from papers.
ReProCon: Scalable and Resource-Efficient Few-Shot Biomedical Named Entity Recognition
Computation and Language
Helps computers understand rare medical words.