FreePRM: Training Process Reward Models Without Ground Truth Process Labels
By: Lin Sun, Chuang Liu, Xiaofeng Ma, and more
Potential Business Impact:
Teaches AI to learn without needing labels for every step.
Recent advancements in Large Language Models (LLMs) have demonstrated that Process Reward Models (PRMs) play a crucial role in enhancing model performance. However, training PRMs typically requires step-level labels, either manually annotated or automatically generated, which can be costly and difficult to obtain at scale. To address this challenge, we introduce FreePRM, a weakly supervised framework for training PRMs without access to ground-truth step-level labels. FreePRM first generates pseudo step-level labels based on the correctness of the final outcome, and then employs Buffer Probability to eliminate the impact of noise inherent in pseudo labeling. Experimental results show that FreePRM achieves an average F1 score of 53.0% on ProcessBench, outperforming a fully supervised PRM trained on Math-Shepherd by +24.1%. Compared to other open-source PRMs, FreePRM outperforms RLHFlow-PRM-Mistral-8B (28.4%) by +24.6%, EurusPRM (31.3%) by +21.7%, and Skywork-PRM-7B (42.1%) by +10.9%. This work introduces a new paradigm in PRM training, significantly reducing reliance on costly step-level annotations while maintaining strong performance.
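The abstract names two ingredients: pseudo step-level labels derived from the final answer's correctness, and a "Buffer Probability" that absorbs the noise in those labels. Below is a minimal Python sketch of that idea, assuming (since the exact formulation is not given here) that every step simply inherits the outcome label and that the buffer acts as a third slack class in a soft target. The function names, the 3-way target layout, and the 0.2 buffer value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np


def pseudo_step_labels(num_steps: int, final_answer_correct: bool) -> np.ndarray:
    """Weak supervision: every step inherits the correctness of the final answer.

    Hypothetical reading of FreePRM's pseudo-labeling; no step-level annotation is used.
    """
    return np.full(num_steps, 1.0 if final_answer_correct else 0.0)


def buffered_targets(pseudo_labels: np.ndarray, buffer_prob: float = 0.2) -> np.ndarray:
    """Build per-step soft targets [correct, incorrect, buffer].

    The buffer slot is a slack class that takes some probability mass away from
    the (possibly wrong) pseudo label, so noisy steps are not forced to a hard 0/1
    target. The 0.2 value is an illustrative choice, not taken from the paper.
    """
    targets = np.zeros((len(pseudo_labels), 3))
    targets[:, 0] = pseudo_labels * (1.0 - buffer_prob)          # "correct" mass
    targets[:, 1] = (1.0 - pseudo_labels) * (1.0 - buffer_prob)  # "incorrect" mass
    targets[:, 2] = buffer_prob                                  # slack / buffer mass
    return targets


def soft_cross_entropy(logits: np.ndarray, targets: np.ndarray) -> float:
    """Mean cross-entropy between per-step model scores (logits) and soft targets."""
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-(targets * log_probs).sum(axis=1).mean())


# Example: a 4-step solution whose final answer was judged correct.
labels = pseudo_step_labels(num_steps=4, final_answer_correct=True)
targets = buffered_targets(labels, buffer_prob=0.2)
logits = np.random.default_rng(0).normal(size=(4, 3))  # stand-in for PRM step scores
print(soft_cross_entropy(logits, targets))
```

The design point this sketch tries to convey is that the buffer class lets the trained PRM hedge on steps whose inherited outcome label may be wrong, instead of fitting every noisy pseudo label exactly.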
Similar Papers
Efficient Process Reward Model Training via Active Learning
Machine Learning (CS)
Teaches computers to learn faster with less work.
GroundedPRM: Tree-Guided and Fidelity-Aware Process Reward Modeling for Step-Level Reasoning
Artificial Intelligence
Makes AI better at solving hard problems.
DreamPRM-Code: Function-as-Step Process Reward Model with Label Correction for LLM Coding
Machine Learning (CS)
Helps computers write better code by breaking it down.