Language Self-Play For Data-Free Training
By: Jakub Grudzien Kuba, Mengting Gu, Qi Ma, and more
Potential Business Impact:
Computers learn to be smarter by playing games.
Large language models (LLMs) have advanced rapidly in recent years, driven by scale, abundant high-quality training data, and reinforcement learning. Yet this progress faces a fundamental bottleneck: the need for ever more data from which models can continue to learn. In this work, we propose a reinforcement learning approach that removes this dependency by enabling models to improve without additional data. Our method leverages a game-theoretic framework of self-play, where a model's capabilities are cast as performance in a competitive game and stronger policies emerge by having the model play against itself, a process we call Language Self-Play (LSP). Experiments with Llama-3.2-3B-Instruct on instruction-following benchmarks show that pretrained models can not only enhance their performance on challenging tasks through self-play alone, but can also do so more effectively than data-driven baselines.
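The abstract describes LSP only at a high level. As a rough illustration of what a competitive self-play objective can look like, the toy loop below has one shared policy play two roles, proposing a task and answering it, with a REINFORCE-style zero-sum update against a stand-in judge. This is a minimal sketch under stated assumptions: the role split, the reward, and every name here are illustrative, not the paper's published algorithm, and a tiny categorical policy stands in for an LLM.

```python
# Hypothetical sketch of a self-play training loop in the spirit of the
# abstract. The proposer/solver split and the toy judge are assumptions
# made for illustration, not the paper's actual method.

import torch
import torch.nn as nn

N_TASKS, N_ANSWERS = 8, 8

class TinyPolicy(nn.Module):
    """One set of parameters plays both roles, as in self-play."""
    def __init__(self):
        super().__init__()
        # Head that proposes tasks (the "challenger" role).
        self.task_logits = nn.Parameter(torch.zeros(N_TASKS))
        # Head that answers a given task (the "solver" role).
        self.answer_logits = nn.Parameter(torch.zeros(N_TASKS, N_ANSWERS))

    def propose_task(self):
        dist = torch.distributions.Categorical(logits=self.task_logits)
        task = dist.sample()
        return task, dist.log_prob(task)

    def answer(self, task):
        dist = torch.distributions.Categorical(logits=self.answer_logits[task])
        ans = dist.sample()
        return ans, dist.log_prob(ans)

def judge(task, answer):
    """Stand-in external scorer: reward 1 if the answer 'matches' the task."""
    return 1.0 if answer.item() == task.item() else 0.0

policy = TinyPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=0.05)

for step in range(500):
    task, logp_task = policy.propose_task()
    ans, logp_ans = policy.answer(task)
    r = judge(task, ans)
    # Zero-sum REINFORCE update (baseline omitted for brevity):
    # the solver term maximizes reward, the proposer term minimizes it,
    # nudging the shared model toward harder tasks it can still learn to solve.
    loss = -(r * logp_ans) + (r * logp_task)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The zero-sum coupling is what makes this a game in the abstract's sense: a single model supplies both the challenge and the response, so improvement requires no external training data, only the judge's signal.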
Similar Papers
The Path of Self-Evolving Large Language Models: Achieving Data-Efficient Learning via Intrinsic Feedback
Computation and Language
AI learns better with less help.
Towards Understanding Self-play for LLM Reasoning
Machine Learning (CS)
Teaches computers to solve math problems better.
Generalising from Self-Produced Data: Model Training Beyond Human Constraints
Artificial Intelligence
AI learns by doing, not just reading.