Group Causal Policy Optimization for Post-Training Large Language Models
By: Ziyin Gu, Jingyao Wang, Ran Zuo, and more
Potential Business Impact:
Makes AI better at choosing the best answers.
Recent advances in large language models (LLMs) have broadened their applicability across diverse tasks, yet specialized domains still require targeted post-training. Among existing methods, Group Relative Policy Optimization (GRPO) stands out for its efficiency, leveraging group-wise relative rewards while avoiding costly value function learning. However, GRPO treats candidate responses as independent, overlooking semantic interactions such as complementarity and contradiction. To address this challenge, we first introduce a Structural Causal Model (SCM) that reveals hidden dependencies among candidate responses: conditioning on a final integrated output induces a collider structure among them. Our causal analysis then leads to two insights: (1) projecting responses onto a causally informed subspace improves prediction quality, and (2) this projection yields a better baseline than query-only conditioning. Building on these insights, we propose Group Causal Policy Optimization (GCPO), which integrates causal structure into optimization through two key components: a causally informed reward adjustment and a novel KL regularization term that aligns the policy with a causally projected reference distribution. Comprehensive experimental evaluations demonstrate that GCPO consistently surpasses existing methods, including GRPO, across multiple reasoning benchmarks.
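To make the two GCPO components described in the abstract concrete, the following is a minimal, illustrative PyTorch sketch rather than the authors' implementation: the helpers causal_projection and gcpo_loss, the coefficients lambda_causal and beta_kl, and the rank-1 projection onto the group-mean direction are all assumptions standing in for the paper's SCM-derived projection and exact objective.

import torch
import torch.nn.functional as F

def causal_projection(embeddings: torch.Tensor) -> torch.Tensor:
    """Project response embeddings onto a shared subspace.

    Sketch only: a rank-1 projection onto the group-mean direction is used
    as a stand-in for the paper's causally informed projection.
    """
    direction = embeddings.mean(dim=0)
    direction = direction / (direction.norm() + 1e-8)
    coeffs = embeddings @ direction            # (G,)
    return coeffs.unsqueeze(1) * direction     # (G, d)

def gcpo_loss(log_probs, ref_log_probs, rewards, embeddings,
              lambda_causal=0.1, beta_kl=0.05):
    """GCPO-style loss for one group of G candidate responses (sketch).

    log_probs:     (G,) summed token log-probs under the current policy
    ref_log_probs: (G,) the same quantity under the reference policy
    rewards:       (G,) scalar rewards for the candidates
    embeddings:    (G, d) response embeddings used for the projection
    """
    # 1) Causally informed reward adjustment: score each response by how
    #    well it aligns with the projected group representation.
    projected = causal_projection(embeddings)
    alignment = F.cosine_similarity(embeddings, projected, dim=1)  # (G,)
    adjusted = rewards + lambda_causal * alignment

    # 2) Group-relative advantage, as in GRPO, computed on adjusted rewards.
    advantage = (adjusted - adjusted.mean()) / (adjusted.std() + 1e-8)

    # 3) Policy-gradient term plus a KL term toward a causally reweighted
    #    reference: the reference log-probs are shifted by the alignment
    #    scores so the KL pulls the policy toward the projected reference.
    pg_term = -(advantage.detach() * log_probs).mean()
    proj_ref = torch.log_softmax(ref_log_probs + alignment, dim=0)
    policy = torch.log_softmax(log_probs, dim=0)
    kl_term = F.kl_div(proj_ref, policy, log_target=True,
                       reduction="batchmean")
    return pg_term + beta_kl * kl_term

The design point the sketch tries to capture is that, under these assumptions, the causal projection enters the objective twice: once to adjust the group-relative rewards and once to reshape the reference distribution that the KL regularizer aligns the policy with.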
Similar Papers
GVPO: Group Variance Policy Optimization for Large Language Model Post-Training
Artificial Intelligence
Makes AI learn better and more reliably.
GTPO: Trajectory-Based Policy Optimization in Large Language Models
Machine Learning (CS)
Makes AI smarter by fixing its mistakes.
Training-Free Group Relative Policy Optimization
Computation and Language
Teaches computers to solve new problems better.