Beyond Correctness: Harmonizing Process and Outcome Rewards through RL Training
By: Chenlu Ye, Zhou Yu, Ziji Zhang, and more
Potential Business Impact:
Teaches computers to reason better, step-by-step.
Reinforcement learning with verifiable rewards (RLVR) has emerged as a predominant paradigm for mathematical reasoning tasks, offering stable improvements in reasoning ability. However, Outcome Reward Models (ORMs) in RLVR are too coarse-grained to distinguish flawed reasoning within correct answers or valid reasoning within incorrect answers. This lack of granularity introduces noisy, misleading gradients and hinders further progress in reasoning-process quality. While Process Reward Models (PRMs) offer fine-grained guidance for intermediate steps, they frequently suffer from inaccuracies and are susceptible to reward hacking. To resolve this dilemma, we introduce the PRocess cOnsistency Filter (PROF), an effective data curation method that harmonizes noisy, fine-grained process rewards with accurate, coarse-grained outcome rewards. Rather than naively blending PRM and ORM in the objective function (arXiv:2506.18896), PROF leverages their complementary strengths through consistency-driven sample selection: it retains correct responses with higher averaged process values and incorrect responses with lower averaged process values, while maintaining a balance of positive and negative training samples. Extensive experiments demonstrate that our method not only consistently improves final accuracy by over $4\%$ compared to blending approaches, but also strengthens the quality of intermediate reasoning steps. Code and training recipes are available at https://github.com/Chenluye99/PROF.
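To make the consistency-driven selection concrete, below is a minimal sketch of the filtering idea as described in the abstract. The names (`Sample`, `prof_filter`, `keep_ratio`) and the exact balancing rule are illustrative assumptions, not the authors' released implementation; see the linked repository for the actual training recipe.

```python
from dataclasses import dataclass
from statistics import mean
from typing import List

@dataclass
class Sample:
    """One sampled response: outcome correctness plus per-step PRM scores."""
    response: str
    is_correct: bool           # coarse-grained outcome reward (ORM)
    step_rewards: List[float]  # fine-grained process rewards (PRM), one per step

def prof_filter(samples: List[Sample], keep_ratio: float = 0.5) -> List[Sample]:
    """Consistency-driven selection (illustrative sketch): keep correct responses
    whose averaged process reward is high and incorrect responses whose averaged
    process reward is low, with an equal number of positives and negatives."""
    correct = [s for s in samples if s.is_correct]
    incorrect = [s for s in samples if not s.is_correct]

    # Rank by mean process value: descending for correct, ascending for incorrect.
    correct.sort(key=lambda s: mean(s.step_rewards), reverse=True)
    incorrect.sort(key=lambda s: mean(s.step_rewards))

    # Keep the same number from each side to preserve positive/negative balance.
    k = int(keep_ratio * min(len(correct), len(incorrect)))
    return correct[:k] + incorrect[:k]
```

In this reading, the filtered batch (rather than the raw rollout set) would feed the RLVR policy update, so the outcome reward stays the training signal while the PRM only decides which samples are consistent enough to learn from.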
Similar Papers
A Survey of Process Reward Models: From Outcome Signals to Process Supervisions for Large Language Models
Computation and Language
Teaches computers to think step-by-step.
From Outcomes to Processes: Guiding PRM Learning from ORM for Inference-Time Alignment
Computation and Language
Makes AI understand and follow instructions better.
VRPRM: Process Reward Modeling via Visual Reasoning
Machine Learning (CS)
Teaches computers to think better with less data.