Adaptive-Boundary-Clipping GRPO: Ensuring Bounded Ratios for Stable and Generalizable Training
By: Chi Liu, Xin Chen
Potential Business Impact:
Makes AI models better at mathematical reasoning while keeping their training stable.
Group Relative Policy Optimization (GRPO) has emerged as a popular algorithm for reinforcement learning with large language models (LLMs). However, analyzing its clipping mechanism, we argue that it is suboptimal in certain scenarios and that appropriate modifications can significantly improve both flexibility and generalization. To this end, we propose Adaptive-Boundary-Clipping GRPO (ABC-GRPO), an asymmetric and adaptive refinement of the original GRPO clipping scheme that keeps importance ratios bounded. We demonstrate that ABC-GRPO achieves superior performance over standard GRPO on mathematical reasoning tasks using the Qwen3 LLMs. Moreover, ABC-GRPO maintains substantially higher entropy throughout training, preserving the model's exploration capacity and mitigating premature convergence. The implementation is available at https://github.com/chi2liu/ABC-GRPO to facilitate reproducibility.
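The abstract does not spell out the adaptation rule for the clip boundaries, so the following is only a minimal sketch of a GRPO-style clipped surrogate with separate (asymmetric) lower and upper boundaries. The function name grpo_asymmetric_clip_loss and the eps_low / eps_high parameters are illustrative assumptions, not the paper's actual implementation; see the linked repository for the authors' code.

```python
import torch

def grpo_asymmetric_clip_loss(logp_new: torch.Tensor,
                              logp_old: torch.Tensor,
                              advantages: torch.Tensor,
                              eps_low: float = 0.2,
                              eps_high: float = 0.2) -> torch.Tensor:
    """GRPO-style clipped policy-gradient surrogate (sketch).

    logp_new / logp_old: per-token log-probabilities under the current
    and behavior policies; advantages: group-normalized advantages.
    eps_low / eps_high are hypothetical asymmetric clip boundaries that
    an adaptive scheme could adjust during training.
    """
    # Importance ratio between current and old policy.
    ratio = torch.exp(logp_new - logp_old)
    # Clip the ratio into the asymmetric interval [1 - eps_low, 1 + eps_high].
    clipped_ratio = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)
    # Pessimistic (min) surrogate, negated so it can be minimized.
    surrogate = torch.minimum(ratio * advantages, clipped_ratio * advantages)
    return -surrogate.mean()
```

In such a sketch, decoupling eps_low from eps_high lets the permitted update range differ for ratio decreases versus increases, which is one way an asymmetric boundary can be tuned or adapted; whether ABC-GRPO adapts the boundaries this way is not stated in the abstract.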
Similar Papers
On the Hidden Objective Biases of Group-based Reinforcement Learning
Machine Learning (CS)
Fixes AI learning to be more fair and accurate.
Multi-Layer GRPO: Enhancing Reasoning and Self-Correction in Large Language Models
Machine Learning (CS)
Teaches computers to fix their own mistakes.
G$^2$RPO-A: Guided Group Relative Policy Optimization with Adaptive Guidance
Artificial Intelligence
Helps small AI learn to think better.