Dynamic and Generalizable Process Reward Modeling
By: Zhangyue Yin, Qiushi Sun, Zhiyuan Zeng and more
Potential Business Impact:
Teaches AI to judge its own work better.
Process Reward Models (PRMs) are crucial for guiding Large Language Models (LLMs) in complex scenarios by providing dense reward signals. However, existing PRMs primarily rely on heuristic approaches, which struggle with cross-domain generalization. While LLM-as-judge has been proposed to provide generalized rewards, current research has focused mainly on feedback results, overlooking the meaningful guidance embedded within the text. Additionally, static and coarse-grained evaluation criteria struggle to adapt to complex process supervision. To tackle these challenges, we propose Dynamic and Generalizable Process Reward Modeling (DG-PRM), which features a reward tree to capture and store fine-grained, multi-dimensional reward criteria. DG-PRM dynamically selects reward signals for step-wise reward scoring. To handle multifaceted reward signals, we are the first to adopt Pareto dominance estimation to identify discriminative positive and negative pairs. Experimental results show that DG-PRM achieves strong performance on prevailing benchmarks, significantly boosting model performance across tasks with dense rewards. Further analysis reveals that DG-PRM adapts well to out-of-distribution scenarios, demonstrating exceptional generalizability.
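To make the Pareto dominance idea concrete, here is a minimal sketch of how multi-dimensional step scores could be compared to pick discriminative positive/negative pairs. This is not the paper's exact formulation; the dimension names (`correctness`, `coherence`, `relevance`) and the pairing rule are assumptions for illustration only.

```python
# Illustrative sketch (assumed, not the paper's implementation): use Pareto dominance
# over multi-dimensional step scores to select discriminative positive/negative pairs.
from itertools import combinations
from typing import Dict, List, Tuple

Scores = Dict[str, float]  # e.g. {"correctness": 0.9, "coherence": 0.7, "relevance": 0.8}

def dominates(a: Scores, b: Scores) -> bool:
    """a Pareto-dominates b: no worse on every criterion, strictly better on at least one."""
    keys = a.keys()
    return all(a[k] >= b[k] for k in keys) and any(a[k] > b[k] for k in keys)

def pareto_pairs(steps: List[Scores]) -> List[Tuple[int, int]]:
    """Return (positive_idx, negative_idx) pairs where the positive step dominates the negative."""
    pairs = []
    for i, j in combinations(range(len(steps)), 2):
        if dominates(steps[i], steps[j]):
            pairs.append((i, j))
        elif dominates(steps[j], steps[i]):
            pairs.append((j, i))
        # Non-dominated (incomparable) pairs are skipped: they are not clearly discriminative.
    return pairs

if __name__ == "__main__":
    candidate_steps = [
        {"correctness": 0.9,  "coherence": 0.8, "relevance": 0.7},
        {"correctness": 0.6,  "coherence": 0.5, "relevance": 0.4},  # dominated by step 0
        {"correctness": 0.95, "coherence": 0.4, "relevance": 0.9},  # incomparable with step 0
    ]
    print(pareto_pairs(candidate_steps))  # -> [(0, 1)]
```

Under this sketch, only pairs with a clear dominance relation are used as training contrasts, which matches the abstract's motivation of identifying discriminative pairs from multifaceted reward signals.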
Similar Papers
A Survey of Process Reward Models: From Outcome Signals to Process Supervisions for Large Language Models
Computation and Language
Teaches computers to think step-by-step.
GM-PRM: A Generative Multimodal Process Reward Model for Multimodal Mathematical Reasoning
Computation and Language
Fixes math problems by explaining each step.