Can Large Reasoning Models Self-Train?
By: Sheikh Shafayat, Fahim Tajwar, Ruslan Salakhutdinov, and more
Potential Business Impact:
Teaches computers math without needing answers.
Scaling the performance of large language models (LLMs) increasingly depends on methods that reduce reliance on human supervision. Reinforcement learning from automated verification offers an alternative, but its scalability is limited by its dependence on human-designed verifiers. Self-training, where the model's own judgment provides the supervisory signal, presents a compelling direction. We propose an online self-training reinforcement learning algorithm that leverages the model's self-consistency to infer correctness signals and train without any ground-truth supervision. We apply the algorithm to challenging mathematical reasoning tasks and show that it quickly reaches performance levels rivaling reinforcement-learning methods trained explicitly on gold-standard answers. Additionally, we analyze inherent limitations of the algorithm, highlighting how the self-generated proxy reward, initially correlated with correctness, can incentivize reward hacking, in which confidently incorrect outputs come to be favored. Our results illustrate how self-supervised improvement can achieve significant performance gains without external labels, while also revealing its fundamental challenges.
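To make the self-consistency signal concrete: the core idea is that the most frequent final answer among a model's own samples can serve as a pseudo-label, with agreeing samples receiving a positive proxy reward. The Python sketch below illustrates this majority-vote reward under our own assumptions; the function name and interface are illustrative and do not reproduce the authors' exact training objective.

```python
from collections import Counter

def self_consistency_reward(sampled_answers):
    """Assign a proxy reward to each sampled answer via majority vote.

    Minimal sketch of a self-consistency pseudo-label: the most frequent
    final answer among the model's own samples is treated as "correct",
    and samples that agree with it receive reward 1.0. This is an
    illustrative stand-in, not the paper's exact algorithm.
    """
    counts = Counter(sampled_answers)
    majority_answer, _ = counts.most_common(1)[0]
    return [1.0 if a == majority_answer else 0.0 for a in sampled_answers]

# Example: four sampled final answers to the same math problem.
answers = ["42", "42", "17", "42"]
print(self_consistency_reward(answers))  # [1.0, 1.0, 0.0, 1.0]
```

A reward of this form can then be plugged into a standard policy-gradient update in place of a verifier-based reward, which is also where the reward-hacking risk the abstract mentions arises: confident but wrong majorities get reinforced.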
Similar Papers
RLSR: Reinforcement Learning from Self Reward
Machine Learning (CS)
AI learns to solve problems by checking its own work.
Incentivizing LLMs to Self-Verify Their Answers
Machine Learning (CS)
Helps computers check their own math answers.
Self-rewarding correction for mathematical reasoning
Artificial Intelligence
Computers learn to fix their own mistakes.