Group Policy Gradient

Published: October 4, 2025 | arXiv ID: 2510.03679v1

By: Junhua Chen, Zixi Zhang, Hantao Zhong, and more

Potential Business Impact:

Trains AI systems faster and with less memory and computing power, because no separate helper model needs to be trained.

Business Areas:
Media and Entertainment

We introduce Group Policy Gradient (GPG), a family of critic-free policy-gradient estimators for general MDPs. Inspired by the success of Group Relative Policy Optimization (GRPO) in Reinforcement Learning from Human Feedback (RLHF), GPG replaces a learned value function with a group-based Monte Carlo advantage estimator, removing the memory, compute, and hyperparameter costs of training a critic while preserving PPO's clipped-objective structure. We prove the consistency of the GPG estimator, analyze its bias-variance tradeoffs, and demonstrate empirically that GPG matches or outperforms PPO on standard benchmarks. GPG also makes better use of parallel simulations, which, together with its critic-free design, yields more efficient use of computational resources than PPO.
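
For concreteness, below is a minimal PyTorch sketch of the recipe the abstract describes: Monte Carlo returns from a group of parallel rollouts that share a start state are centered within the group to form advantages, which then feed PPO's clipped objective unchanged. The function names, tensor shapes, and the standard-deviation normalization are illustrative assumptions, not the paper's actual implementation.

import torch

def group_advantages(returns: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # returns has shape (num_groups, G): one Monte Carlo return per rollout,
    # with the G rollouts in each row starting from the same state.
    mean = returns.mean(dim=1, keepdim=True)  # group baseline replaces the critic
    std = returns.std(dim=1, keepdim=True)    # optional within-group scaling (assumption)
    return (returns - mean) / (std + eps)

def clipped_surrogate(logp_new, logp_old, advantages, clip_eps: float = 0.2):
    # PPO's clipped surrogate objective, reused unchanged with group advantages.
    ratio = torch.exp(logp_new - logp_old)
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    return -torch.min(ratio * advantages, clipped * advantages).mean()

# Toy usage: 4 start states, G = 8 parallel rollouts each.
returns = torch.randn(4, 8)
advantages = group_advantages(returns)
logp_old = torch.randn(4, 8)
logp_new = logp_old + 0.01 * torch.randn(4, 8)
loss = clipped_surrogate(logp_new, logp_old, advantages)
print(loss.item())

Here the within-group mean plays the role of the baseline that a learned value function would otherwise estimate; dividing by the group standard deviation, as GRPO does, is shown as one plausible variant of the bias-variance choices the paper analyzes.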

Country of Origin
🇬🇧 United Kingdom

Page Count
13 pages

Category
Computer Science:
Machine Learning (CS)