PRPO: Aligning Process Reward with Outcome Reward in Policy Optimization

Published: January 12, 2026 | arXiv ID: 2601.07182v1

By: Ruiyi Ding, Yongxuan Lv, Xianhui Meng and more

Potential Business Impact:

Improves the multi-step mathematical reasoning accuracy of large language models through finer-grained reward credit assignment during reinforcement learning fine-tuning.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Policy optimization for large language models often suffers from sparse reward signals in multi-step reasoning tasks. Critic-free methods like GRPO assign a single normalized outcome reward to all tokens, providing limited guidance for intermediate reasoning steps. While Process Reward Models (PRMs) offer dense feedback, they risk premature collapse when used alone, as early low-reward tokens can drive policies toward truncated outputs. We introduce Process Relative Policy Optimization (PRPO), which combines outcome reliability with process-level guidance in a critic-free framework. PRPO segments reasoning sequences based on semantic clues, normalizes PRM scores into token-level advantages, and aligns their distribution with the outcome advantages through a location-parameter shift. On MATH500, PRPO improves Qwen2.5-Math-1.5B accuracy from 61.2% with GRPO to 64.4%, using only eight rollouts and no value network, demonstrating efficient fine-grained credit assignment within critic-free optimization.
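The sketch below illustrates one plausible reading of the advantage construction the abstract describes: GRPO-style group normalization of the sparse outcome reward, per-rollout normalization of dense PRM scores, and a location-parameter shift that moves the process advantages onto the outcome advantage. The function name, array shapes, and segmentation granularity are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def prpo_advantages(outcome_rewards, prm_scores_per_rollout, eps=1e-8):
    """Hedged sketch of PRPO-style advantage construction (assumed interface).

    outcome_rewards: shape (G,), one scalar outcome reward per rollout in the group.
    prm_scores_per_rollout: list of G arrays, each holding the PRM score for every
        reasoning segment of that rollout (segments assumed already extracted).

    Returns a list of G arrays of segment-level advantages whose location is
    aligned with the GRPO-style normalized outcome advantage.
    """
    outcome_rewards = np.asarray(outcome_rewards, dtype=np.float64)

    # GRPO-style group normalization of the sparse outcome reward.
    outcome_adv = (outcome_rewards - outcome_rewards.mean()) / (outcome_rewards.std() + eps)

    aligned = []
    for adv_o, prm in zip(outcome_adv, prm_scores_per_rollout):
        prm = np.asarray(prm, dtype=np.float64)
        # Normalize dense PRM scores within the rollout (zero mean, unit scale).
        proc_adv = (prm - prm.mean()) / (prm.std() + eps)
        # Location-parameter shift: since proc_adv now has zero mean, adding the
        # rollout's outcome advantage moves the distribution's location onto it.
        aligned.append(proc_adv + adv_o)
    return aligned

# Toy usage: a group of 3 rollouts with 0/1 outcome rewards and per-segment PRM scores.
if __name__ == "__main__":
    outcomes = [1.0, 0.0, 1.0]
    prm_scores = [np.array([0.9, 0.7, 0.8]),
                  np.array([0.6, 0.2]),
                  np.array([0.8, 0.9, 0.95, 0.85])]
    for i, adv in enumerate(prpo_advantages(outcomes, prm_scores)):
        print(f"rollout {i}: segment advantages = {np.round(adv, 3)}")
```

Under this reading, segments keep their relative ranking from the PRM while the overall sign and scale of each rollout's advantages stay anchored to the outcome signal, which is consistent with the abstract's claim of avoiding PRM-only collapse.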

Country of Origin
🇨🇳 China

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)