Outcome-Grounded Advantage Reshaping for Fine-Grained Credit Assignment in Mathematical Reasoning
By: Ziheng Li, Liu Kang, Feng Xiao, and more
Potential Business Impact:
Helps AI learn to solve math problems better.
Group Relative Policy Optimization (GRPO) has emerged as a promising critic-free reinforcement learning paradigm for reasoning tasks. However, standard GRPO employs a coarse-grained credit assignment mechanism that propagates group-level rewards uniformly to every token in a sequence, neglecting the varying contribution of individual reasoning steps. We address this limitation by introducing Outcome-grounded Advantage Reshaping (OAR), a fine-grained credit assignment mechanism that redistributes advantages based on how much each token influences the model's final answer. We instantiate OAR via two complementary strategies: (1) OAR-P, which estimates outcome sensitivity through counterfactual token perturbations and serves as a high-fidelity attribution signal; and (2) OAR-G, which approximates the influence signal with an input-gradient sensitivity proxy computed in a single backward pass. These importance signals are integrated with a conservative bi-level advantage reshaping scheme that suppresses low-impact tokens and boosts pivotal ones while preserving the overall advantage mass. Empirical results on extensive mathematical reasoning benchmarks demonstrate that OAR-P sets the performance upper bound, while OAR-G achieves comparable gains with negligible computational overhead; both significantly outperform a strong GRPO baseline, pushing the boundaries of critic-free LLM reasoning.
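To make the abstract's two ingredients concrete, the sketch below shows one plausible way to combine a gradient-based token-sensitivity proxy (in the spirit of OAR-G) with a mass-preserving two-tier advantage reshaping, assuming PyTorch and a Hugging Face style model that exposes `get_input_embeddings()`. The helper `answer_logprob_fn`, the quantile split, and the boost/suppress factors are hypothetical illustrations, not the authors' implementation.

```python
import torch


def gradient_token_sensitivity(model, input_ids, answer_logprob_fn):
    """Approximate each token's influence on the final answer (OAR-G style proxy).

    A minimal sketch: importance is taken as the norm of the gradient of a
    scalar answer score with respect to each token's input embedding, obtained
    with a single backward pass. `answer_logprob_fn` is a hypothetical helper
    that runs the model on input embeddings and returns the answer log-probability.
    """
    embeds = model.get_input_embeddings()(input_ids).detach().requires_grad_(True)
    answer_logprob = answer_logprob_fn(model, embeds)  # scalar outcome score
    answer_logprob.backward()
    # Per-token sensitivity: L2 norm of the embedding gradient, shape (seq_len,).
    return embeds.grad.norm(dim=-1)


def reshape_advantages(adv, sensitivity, boost=1.5, suppress=0.5, quantile=0.8):
    """Hypothetical bi-level reshaping: boost high-sensitivity tokens, suppress
    low-sensitivity ones, then rescale so the total advantage mass is unchanged."""
    threshold = sensitivity.quantile(quantile)
    scale = torch.where(
        sensitivity >= threshold,
        torch.full_like(sensitivity, boost),
        torch.full_like(sensitivity, suppress),
    )
    reshaped = adv * scale
    # Rescale to preserve the sequence's overall advantage mass
    # (small epsilon only guards against division by zero).
    return reshaped * (adv.sum() / (reshaped.sum() + 1e-8))
```

In GRPO the per-token advantage is the same group-relative scalar broadcast across the sequence, so under this sketch the reshaping only redistributes that fixed mass toward tokens the sensitivity proxy flags as pivotal; the counterfactual perturbation variant (OAR-P) would replace the gradient proxy with repeated forward passes on perturbed tokens, at correspondingly higher cost.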
Similar Papers
G$^2$RPO-A: Guided Group Relative Policy Optimization with Adaptive Guidance
Artificial Intelligence
Helps small AI learn to think better.
PRPO: Aligning Process Reward with Outcome Reward in Policy Optimization
Machine Learning (CS)
Teaches AI to solve math problems better.
The Peril of Preference: Why GRPO fails on Ordinal Rewards
Artificial Intelligence
Teaches AI to learn better from mistakes.