Zero Reinforcement Learning Towards General Domains
By: Yuyuan Zeng, Yufei Huang, Can Xu, and more
Potential Business Impact:
Teaches computers to think better everywhere.
Zero Reinforcement Learning (Zero-RL) has proven to be an effective approach for enhancing the reasoning capabilities of large language models (LLMs) by directly applying reinforcement learning with verifiable rewards to pretrained models, without a supervised fine-tuning phase. However, current research on Zero-RL focuses primarily on domains with easily verifiable reward signals, such as mathematics, programming, and other reasoning tasks. The challenge of eliciting reasoning abilities in more diverse scenarios, where verification is not straightforward, remains underexplored. To address this gap, we propose a novel Zero-RL paradigm designed to improve a model's reasoning ability across both verifiable and non-verifiable domains. By combining verifiable rewards with a generative reward model, we conduct multi-task Zero-RL training across both domains, facilitating the transfer of reasoning capabilities between them. Furthermore, to mitigate reward hacking in the generative reward model, we design a smooth length penalty that encourages the generation of more comprehensive thinking tokens in general domains. Experimental results on Qwen3-8B-Base and Qwen3-14B-Base demonstrate that our approach achieves superior reasoning performance, not only on tasks requiring extensive reasoning but also on more general tasks.
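The abstract does not give the exact reward formulation, so the Python sketch below is only a minimal illustration under assumed names: it routes each training sample to either a rule-based verifier score or a generative reward model (GRM) score, with a smooth logistic length penalty damping the GRM score so that terse, low-effort responses cannot hack the reward. The function names, the logistic form, and the hyperparameters (target_length, smoothness) are assumptions for illustration, not the authors' implementation.

import math

def smooth_length_penalty(num_thinking_tokens: int,
                          target_length: int = 2048,
                          smoothness: float = 256.0) -> float:
    # Hypothetical smooth penalty in (0, 1): approaches 1 as the
    # thinking segment nears the assumed target length, so longer,
    # more comprehensive reasoning earns a larger share of the reward.
    return 1.0 / (1.0 + math.exp(-(num_thinking_tokens - target_length) / smoothness))

def combined_reward(task_is_verifiable: bool,
                    verifier_score: float,
                    grm_score: float,
                    num_thinking_tokens: int) -> float:
    # Route each training sample to the matching reward source
    # during multi-task Zero-RL training.
    if task_is_verifiable:
        # Math/code: reward comes from a rule-based verifier.
        return verifier_score
    # General domains: GRM score, damped by the smooth length
    # penalty to discourage reward hacking via short outputs.
    return grm_score * smooth_length_penalty(num_thinking_tokens)

A smooth penalty, unlike a hard length threshold, avoids a sharp cutoff that the policy could game by clustering just above the threshold, while still giving a graded incentive toward longer, more comprehensive thinking.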
Similar Papers
Absolute Zero: Reinforced Self-play Reasoning with Zero Data
Machine Learning (CS)
AI teaches itself to solve hard problems.
Revisiting Reinforcement Learning for LLM Reasoning from A Cross-Domain Perspective
Machine Learning (CS)
Teaches computers to think better in many subjects.
PretrainZero: Reinforcement Active Pretraining
Computation and Language
Teaches computers to learn like humans.