Error Typing for Smarter Rewards: Improving Process Reward Models with Error-Aware Hierarchical Supervision
By: Tej Deep Pala, Panshul Sharma, Amir Zadeh, and more
Potential Business Impact:
Helps computers catch mistakes while solving math problems.
Large Language Models (LLMs) are prone to hallucination, especially during multi-hop and reasoning-intensive tasks such as mathematical problem solving. While Outcome Reward Models verify only final answers, Process Reward Models (PRMs) score each intermediate step to steer generation toward coherent solutions. We introduce PathFinder-PRM, a novel hierarchical, error-aware discriminative PRM that first classifies math and consistency errors at each step, then combines these fine-grained signals to estimate step correctness. To train PathFinder-PRM, we construct a 400K-sample dataset by enriching the human-annotated PRM800K corpus and RLHFlow Mistral traces with three-dimensional step-level labels. On PRMBench, PathFinder-PRM achieves a new state-of-the-art PRMScore of 67.7, outperforming the prior best (65.5) while using three times less data. When applied to reward-guided greedy search, our model yields a prm@8 of 48.3, a +1.5 point gain over the strongest baseline. These results demonstrate that decoupling error detection from reward estimation not only boosts fine-grained error detection but also substantially improves end-to-end, reward-guided mathematical reasoning with greater data efficiency.
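The abstract describes a two-stage, hierarchical scoring scheme (per-step math-error and consistency-error classification, combined into a step-correctness reward) and its use in reward-guided greedy search. The sketch below illustrates that idea only; the names `StepErrorSignals`, `score_step`, and `reward_guided_greedy_search`, the product combination of the two error signals, and the stand-in generator and classifier are illustrative assumptions, not the paper's actual model or API.

```python
import random
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class StepErrorSignals:
    """Per-step error probabilities (illustrative names, not the paper's interface)."""
    p_math_error: float         # probability the step contains a mathematical error
    p_consistency_error: float  # probability the step contradicts earlier steps


def score_step(signals: StepErrorSignals) -> float:
    """Combine the two fine-grained error signals into one step-correctness reward.

    The abstract only says the signals are "combined"; multiplying the two
    per-dimension correctness probabilities is an assumption made here.
    """
    return (1.0 - signals.p_math_error) * (1.0 - signals.p_consistency_error)


def reward_guided_greedy_search(
    problem: str,
    propose_steps: Callable[[str, List[str]], List[str]],
    classify_errors: Callable[[str, List[str], str], StepErrorSignals],
    max_steps: int = 8,
) -> List[str]:
    """Greedy decoding guided by the PRM: at each depth, score every candidate
    next step and keep the highest-reward one (a sketch of reward-guided search,
    not the authors' exact procedure)."""
    solution: List[str] = []
    for _ in range(max_steps):
        candidates = propose_steps(problem, solution)
        if not candidates:
            break
        best = max(
            candidates,
            key=lambda c: score_step(classify_errors(problem, solution, c)),
        )
        solution.append(best)
    return solution


if __name__ == "__main__":
    # Stand-in generator and error classifier so the sketch runs end to end.
    def propose_steps(problem: str, prefix: List[str]) -> List[str]:
        return [] if len(prefix) >= 3 else [f"step option {i}" for i in range(4)]

    def classify_errors(problem: str, prefix: List[str], step: str) -> StepErrorSignals:
        return StepErrorSignals(p_math_error=random.random(),
                                p_consistency_error=random.random())

    print(reward_guided_greedy_search("Solve 2x + 3 = 7.", propose_steps, classify_errors))
```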
Similar Papers
Towards Hierarchical Multi-Step Reward Models for Enhanced Reasoning in Large Language Models
Computation and Language
Teaches computers to think better, step-by-step.
The Lessons of Developing Process Reward Models in Mathematical Reasoning
Computation and Language
Helps computers show their math work correctly.
GroundedPRM: Tree-Guided and Fidelity-Aware Process Reward Modeling for Step-Level Reasoning
Artificial Intelligence
Makes AI better at solving hard problems.