Score: 2

GPG: A Simple and Strong Reinforcement Learning Baseline for Model Reasoning

Published: April 3, 2025 | arXiv ID: 2504.02546v3

By: Xiangxiang Chu, Hailang Huang, Xiao Zhang, and more

BigTech Affiliations: Alibaba

Potential Business Impact:

Simplifies and lowers the cost of reinforcement learning training for reasoning models while improving their task performance.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Reinforcement Learning (RL) can directly enhance the reasoning capabilities of large language models without extensive reliance on Supervised Fine-Tuning (SFT). In this work, we revisit the traditional Policy Gradient (PG) mechanism and propose a minimalist RL approach termed Group Policy Gradient (GPG). Unlike conventional methods, GPG directly optimizes the original RL objective, thus obviating the need for surrogate loss functions. By eliminating the critic and reference models, avoiding KL divergence constraints, and addressing advantage and gradient estimation bias, our approach significantly simplifies the training process compared to Group Relative Policy Optimization (GRPO). Our approach achieves superior performance without relying on auxiliary techniques or adjustments. As illustrated in Figure 1, extensive experiments demonstrate that our method not only reduces computational costs but also consistently outperforms GRPO across various unimodal and multimodal tasks. Our code is available at https://github.com/AMAP-ML/GPG.
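To make the abstract's description concrete, below is a minimal sketch of what a group-based policy-gradient loss of this kind can look like. It is a hypothetical illustration, not the authors' implementation: the function name `group_policy_gradient_loss`, the tensor shapes, and the simple mean-centered group baseline are assumptions for the example. The key ideas it reflects are those stated in the abstract: no critic or reference model, no KL penalty, and a plain (non-surrogate) policy-gradient objective with a group-relative advantage.

```python
import torch


def group_policy_gradient_loss(logprobs, rewards, group_size):
    """Hypothetical sketch of a group-based policy-gradient loss.

    logprobs: (num_groups * group_size,) summed token log-probs per sampled response
    rewards:  (num_groups * group_size,) scalar reward per response
    Each consecutive block of `group_size` responses shares the same prompt.
    """
    logprobs = logprobs.view(-1, group_size)
    rewards = rewards.view(-1, group_size)

    # Group-relative advantage: center rewards within each group of samples,
    # so the group mean replaces a learned critic as the baseline.
    advantages = rewards - rewards.mean(dim=1, keepdim=True)

    # Plain REINFORCE-style objective: no clipped surrogate, no KL penalty,
    # and no reference model.
    loss = -(advantages.detach() * logprobs).mean()
    return loss


# Toy usage: 2 prompts with 4 sampled responses each.
if __name__ == "__main__":
    torch.manual_seed(0)
    logprobs = torch.randn(8, requires_grad=True)
    rewards = torch.tensor([1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0])
    loss = group_policy_gradient_loss(logprobs, rewards, group_size=4)
    loss.backward()
    print(loss.item())
```

In this sketch the per-group mean serves as the baseline, which is what removes the need for a separate value (critic) network; the exact advantage normalization and bias corrections used by GPG are described in the paper itself.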

Country of Origin
🇨🇳 China

Repos / Data Links
https://github.com/AMAP-ML/GPG

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)