Score: 1

G$^2$RPO-A: Guided Group Relative Policy Optimization with Adaptive Guidance

Published: August 18, 2025 | arXiv ID: 2508.13023v1

By: Yongxin Guo, Wenbo Deng, Zhenglin Cheng, and more

Potential Business Impact:

Helps smaller language models learn to reason better by guiding their training with ground-truth reasoning steps.

Reinforcement Learning with Verifiable Rewards (RLVR) has markedly enhanced the reasoning abilities of large language models (LLMs). Its success, however, largely depends on strong base models with rich world knowledge, yielding only modest improvements for small-size language models (SLMs). To address this limitation, we investigate Guided GRPO, which injects ground-truth reasoning steps into roll-out trajectories to compensate for SLMs' inherent weaknesses. Through a comprehensive study of various guidance configurations, we find that naively adding guidance delivers limited gains. These insights motivate G$^2$RPO-A, an adaptive algorithm that automatically adjusts guidance strength in response to the model's evolving training dynamics. Experiments on mathematical reasoning and code-generation benchmarks confirm that G$^2$RPO-A substantially outperforms vanilla GRPO. Our code and models are available at https://github.com/T-Lab-CUHKSZ/G2RPO-A.
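To make the idea concrete, below is a minimal sketch of what "injecting ground-truth reasoning steps into roll-out trajectories" with an adaptive guidance strength could look like. All names (`guided_prefix`, `AdaptiveGuidance`, the thresholds) are illustrative assumptions, not the authors' actual API, and the simple reward-threshold rule stands in for the paper's adaptation schedule rather than reproducing it.

```python
# Illustrative sketch of adaptive guidance for guided GRPO roll-outs.
# Assumption: guidance strength is the fraction of ground-truth reasoning
# steps revealed to the policy before it completes the trajectory.

from dataclasses import dataclass


def guided_prefix(ground_truth_steps: list[str], guidance_ratio: float) -> str:
    """Reveal the first `guidance_ratio` fraction of ground-truth steps;
    the model must generate the remaining reasoning on its own."""
    k = int(round(guidance_ratio * len(ground_truth_steps)))
    return "\n".join(ground_truth_steps[:k])


@dataclass
class AdaptiveGuidance:
    """Adjust guidance strength from recent group rewards: more help when
    the roll-out group is failing, less as the policy improves."""
    ratio: float = 0.5   # current fraction of steps revealed
    step: float = 0.1    # adjustment per update
    low: float = 0.2     # mean reward below which guidance increases
    high: float = 0.8    # mean reward above which guidance decreases

    def update(self, mean_group_reward: float) -> float:
        if mean_group_reward < self.low:
            self.ratio = min(1.0, self.ratio + self.step)
        elif mean_group_reward > self.high:
            self.ratio = max(0.0, self.ratio - self.step)
        return self.ratio


if __name__ == "__main__":
    steps = ["Let x be the unknown.", "Set up 2x + 3 = 11.", "Solve: x = 4."]
    guide = AdaptiveGuidance()
    # Simulated mean verifiable rewards over successive GRPO groups.
    for reward in (0.1, 0.15, 0.5, 0.85, 0.9):
        ratio = guide.update(reward)
        print(f"reward={reward:.2f} -> guidance ratio={ratio:.2f}")
        print("  prefix:", guided_prefix(steps, ratio) or "(no guidance)")
```

The prefix would then seed each roll-out in the GRPO group, with the verifiable reward computed on the model's completion as usual; the exact scheduling and reward thresholds in G$^2$RPO-A are described in the paper itself.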

Repos / Data Links
https://github.com/T-Lab-CUHKSZ/G2RPO-A

Page Count
13 pages

Category
Computer Science:
Artificial Intelligence