Group Policy Gradient
By: Junhua Chen, Zixi Zhang, Hantao Zhong, and more
Potential Business Impact:
Trains AI faster without needing extra computing power.
We introduce Group Policy Gradient (GPG), a family of critic-free policy-gradient estimators for general MDPs. Inspired by the success of GRPO in Reinforcement Learning from Human Feedback (RLHF), GPG replaces a learned value function with a group-based Monte Carlo advantage estimator, removing the memory, compute, and hyperparameter costs of training a critic while preserving PPO's clipped-objective structure. We prove the consistency of the GPG estimator, analyze its bias-variance tradeoffs, and demonstrate empirically that GPG matches or outperforms PPO on standard benchmarks. GPG also makes better use of parallel simulations, which, together with its critic-free design, yields more efficient use of computational resources than PPO.
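A minimal sketch of the idea described in the abstract, assuming a PyTorch setting: the advantage of each of G parallel rollouts is its Monte Carlo return minus the group mean (here also divided by the group standard deviation), and that advantage is plugged into PPO's clipped surrogate in place of a critic-based estimate. The function names, group size, clip range, and standard-deviation normalization are illustrative assumptions, not the paper's reference implementation.

```python
import torch

def group_advantages(returns: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # returns: shape (G,), Monte Carlo returns from G rollouts sharing a start state.
    # The group mean serves as the baseline in place of a learned critic.
    baseline = returns.mean()
    return (returns - baseline) / (returns.std(unbiased=False) + eps)

def gpg_clipped_loss(logp_new: torch.Tensor,
                     logp_old: torch.Tensor,
                     advantages: torch.Tensor,
                     clip_eps: float = 0.2) -> torch.Tensor:
    # PPO-style clipped surrogate, evaluated with the group-based advantages.
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()  # negated: minimized by the optimizer
```

Because the baseline comes from the group of parallel rollouts rather than a trained value network, there is no critic to store, update, or tune, which is where the claimed memory and compute savings over PPO come from.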
Similar Papers
GPG: A Simple and Strong Reinforcement Learning Baseline for Model Reasoning
Machine Learning (CS)
Makes AI smarter and faster to train.
Group-in-Group Policy Optimization for LLM Agent Training
Machine Learning (CS)
Helps AI agents learn better from many steps.
GPG: Generalized Policy Gradient Theorem for Transformer-based Policies
Machine Learning (CS)
Teaches AI to learn better and faster.