GPG: A Simple and Strong Reinforcement Learning Baseline for Model Reasoning
By: Xiangxiang Chu, Hailang Huang, Xiao Zhang, and more
Potential Business Impact:
Makes AI smarter and faster to train.
Reinforcement Learning (RL) can directly enhance the reasoning capabilities of large language models without extensive reliance on Supervised Fine-Tuning (SFT). In this work, we revisit the traditional Policy Gradient (PG) mechanism and propose a minimalist RL approach termed Group Policy Gradient (GPG). Unlike conventional methods, GPG directly optimizes the original RL objective, obviating the need for surrogate loss functions. By eliminating the critic and reference models, avoiding KL divergence constraints, and addressing bias in advantage and gradient estimation, our approach significantly simplifies the training process compared to Group Relative Policy Optimization (GRPO), and it achieves superior performance without relying on auxiliary techniques or adjustments. As illustrated in Figure 1, extensive experiments demonstrate that our method not only reduces computational costs but also consistently outperforms GRPO across various unimodal and multimodal tasks. Our code is available at https://github.com/AMAP-ML/GPG.
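To make the idea concrete, the sketch below shows a group-relative vanilla policy-gradient loss in PyTorch: advantages are computed within a group of sampled responses to the same prompt, and the objective is applied directly, with no learned critic, no reference model, no KL penalty, and no clipped surrogate. This is an illustrative sketch, not the authors' exact formulation; the function name `gpg_loss`, the specific advantage normalization, and the constant `eps` are assumptions, and the official implementation in the repository above should be treated as authoritative.

```python
import torch

def gpg_loss(log_probs: torch.Tensor, rewards: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Illustrative group policy-gradient style loss (sketch, not the paper's exact loss).

    log_probs: shape (G,), summed log-probabilities of G sampled responses
               to the same prompt under the current policy.
    rewards:   shape (G,), scalar rewards for those responses.
    """
    # Group-relative advantage: center (and here also scale) rewards within the group.
    # The exact normalization used in GPG may differ; this is one common choice.
    advantages = (rewards - rewards.mean()) / (rewards.std() + eps)

    # Plain REINFORCE-style objective: no importance ratio, no clipping,
    # no KL term against a reference model, and no value-function critic.
    return -(advantages.detach() * log_probs).mean()

# Toy usage with dummy values (illustrative only).
log_probs = torch.tensor([-12.3, -10.1, -15.7, -11.0], requires_grad=True)
rewards = torch.tensor([1.0, 0.0, 0.0, 1.0])
loss = gpg_loss(log_probs, rewards)
loss.backward()
```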
Similar Papers
Group Policy Gradient
Machine Learning (CS)
Teaches computers faster without needing extra brainpower.
Training-Free Group Relative Policy Optimization
Computation and Language
Teaches computers to solve new problems better.
Rewarding the Unlikely: Lifting GRPO Beyond Distribution Sharpening
Machine Learning (CS)
Teaches computers to find rare, correct answers.