PRPO: Aligning Process Reward with Outcome Reward in Policy Optimization
By: Ruiyi Ding, Yongxuan Lv, Xianhui Meng, and more
Potential Business Impact:
Teaches AI to solve math problems better.
Policy optimization for large language models often suffers from sparse reward signals in multi-step reasoning tasks. Critic-free methods such as GRPO assign a single normalized outcome reward to all tokens, providing limited guidance for intermediate reasoning. While Process Reward Models (PRMs) offer dense feedback, they risk premature collapse when used alone, as early low-reward tokens can drive policies toward truncated outputs. We introduce Process Relative Policy Optimization (PRPO), which combines outcome reliability with process-level guidance in a critic-free framework. PRPO segments reasoning sequences based on semantic cues, normalizes PRM scores into token-level advantages, and aligns their distribution with the outcome advantages through a location-parameter shift. On MATH500, PRPO improves Qwen2.5-Math-1.5B accuracy over GRPO from 61.2% to 64.4% using only eight rollouts and no value network, demonstrating efficient fine-grained credit assignment within critic-free optimization.
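To make the abstract's advantage construction more concrete, here is a minimal, hypothetical Python sketch. It is not the authors' code: the function name `prpo_advantages`, the variable names, and the exact normalization and broadcasting choices are assumptions based only on the abstract's wording (group-normalized outcome rewards as in GRPO, per-segment PRM scores normalized into process advantages, and a location shift that centers the process advantages on each rollout's outcome advantage).

```python
import numpy as np

def prpo_advantages(outcome_rewards, prm_scores_per_rollout, seg_token_counts, eps=1e-8):
    """Hypothetical sketch of PRPO-style advantage construction.

    outcome_rewards:        (G,) one scalar outcome reward per rollout in the group
    prm_scores_per_rollout: list of length G; element i is an (S_i,) array of PRM
                            scores, one per semantic segment of rollout i
    seg_token_counts:       list of length G; element i is an (S_i,) array with the
                            number of tokens in each segment of rollout i
    Returns a list of per-token advantage arrays, one per rollout.
    """
    outcome_rewards = np.asarray(outcome_rewards, dtype=np.float64)

    # 1) GRPO-style outcome advantage: normalize outcome rewards within the group.
    out_adv = (outcome_rewards - outcome_rewards.mean()) / (outcome_rewards.std() + eps)

    token_advs = []
    for g, (prm, counts) in enumerate(zip(prm_scores_per_rollout, seg_token_counts)):
        prm = np.asarray(prm, dtype=np.float64)

        # 2) Normalize PRM scores into zero-mean process advantages for this rollout.
        proc_adv = (prm - prm.mean()) / (prm.std() + eps)

        # 3) Location-parameter shift: since proc_adv has zero mean, adding the
        #    rollout's outcome advantage moves the process-advantage distribution
        #    so that its center matches the outcome signal.
        proc_adv = proc_adv + out_adv[g]

        # Broadcast each segment's advantage to every token inside that segment.
        token_advs.append(np.repeat(proc_adv, np.asarray(counts)))
    return token_advs
```

Under this reading, the outcome reward fixes where the per-rollout advantage distribution sits, while the PRM scores only redistribute credit across segments within the rollout; the actual segmentation rule and alignment details are in the paper itself.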
Similar Papers
GRPO is Secretly a Process Reward Model
Machine Learning (CS)
Makes AI learn faster and better.
PROPA: Toward Process-level Optimization in Visual Reasoning via Reinforcement Learning
CV and Pattern Recognition
Teaches computers to think through problems step-by-step.
From Outcomes to Processes: Guiding PRM Learning from ORM for Inference-Time Alignment
Computation and Language
Makes AI understand and follow instructions better.