Better Process Supervision with Bi-directional Rewarding Signals
By: Wenxiang Chen, Wei He, Zhiheng Xi, and more
Potential Business Impact:
Helps AI solve hard math problems more reliably.
Process supervision, i.e., evaluating each step, is critical for complex large language model (LLM) reasoning and for test-time search with increased inference compute. Existing approaches, represented by process reward models (PRMs), focus primarily on reward signals up to the current step; they are one-directional and lack a mechanism to model the distance to the final target. To address this problem, we draw inspiration from the A* algorithm, in which an effective evaluation simultaneously considers the cost incurred so far and the estimated cost remaining to reach the target. Building on this key insight, we introduce BiRM, a novel process supervision model that not only evaluates the correctness of previous steps but also models the probability of future success. We conduct extensive experiments on mathematical reasoning tasks and demonstrate that BiRM provides more precise evaluations of LLM reasoning steps, improving on PRM by 3.1% on Gaokao2023 under Best-of-N sampling. In search-based strategies, BiRM provides more comprehensive guidance, outperforming ORM by 5.0% and PRM by 3.8% on MATH-500.
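The A*-style decomposition behind BiRM (score a partial solution by the reward for the steps taken so far plus an estimate of reaching a correct answer from here, analogous to f = g + h) can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the paper's implementation: birm_score, prm_scores, value_scores, and beta are hypothetical names, and the aggregation choices (mean of past step rewards, last value estimate) are assumptions.

# Minimal sketch of bidirectional scoring; all names and aggregation
# choices here are hypothetical, not BiRM's actual interface.
from typing import List

def birm_score(prm_scores: List[float], value_scores: List[float],
               beta: float = 1.0) -> float:
    """Score a reasoning trace A*-style: combine the reward for steps
    taken so far (g) with an estimate of future success (h).

    prm_scores:   per-step correctness rewards for the steps so far
    value_scores: per-step estimates of the probability of eventually
                  reaching a correct final answer
    beta:         weight on the future-success term (assumed hyperparameter)
    """
    g = sum(prm_scores) / len(prm_scores)  # past correctness, averaged
    h = value_scores[-1]                   # estimated success from the last step
    return g + beta * h

def best_of_n(candidates: List[dict], beta: float = 1.0) -> dict:
    """Rerank N sampled solutions and keep the highest-scoring one."""
    return max(candidates,
               key=lambda c: birm_score(c["prm_scores"],
                                        c["value_scores"], beta))

# Toy usage: two candidate solutions with made-up step scores.
candidates = [
    {"prm_scores": [0.9, 0.8, 0.7], "value_scores": [0.6, 0.5, 0.4]},
    {"prm_scores": [0.8, 0.8, 0.8], "value_scores": [0.7, 0.7, 0.8]},
]
print(best_of_n(candidates))

In this sketch, best_of_n mirrors the Best-of-N setting the abstract reports results for: a one-directional PRM would rank candidates by past-step rewards alone, while the added value term lets a candidate with a higher estimated chance of future success win.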
Similar Papers
The Bidirectional Process Reward Model
Computation and Language
Helps AI check its thinking both ways.
A Survey of Process Reward Models: From Outcome Signals to Process Supervisions for Large Language Models
Computation and Language
Teaches computers to think step-by-step.
Towards Hierarchical Multi-Step Reward Models for Enhanced Reasoning in Large Language Models
Computation and Language
Teaches computers to think better, step-by-step.