Score: 1

Dynamic and Generalizable Process Reward Modeling

Published: July 23, 2025 | arXiv ID: 2507.17849v1

By: Zhangyue Yin, Qiushi Sun, Zhiyuan Zeng, and more

Potential Business Impact:

Teaches AI models to grade each step of their own reasoning, making them more reliable across different kinds of tasks.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Process Reward Models (PRMs) are crucial for guiding Large Language Models (LLMs) in complex scenarios by providing dense reward signals. However, existing PRMs rely primarily on heuristic approaches, which struggle with cross-domain generalization. While LLM-as-judge has been proposed to provide generalized rewards, current research has focused mainly on the feedback scores themselves, overlooking the meaningful guidance embedded in the judge's text. Moreover, static and coarse-grained evaluation criteria struggle to adapt to complex process supervision. To tackle these challenges, we propose Dynamic and Generalizable Process Reward Modeling (DG-PRM), which uses a reward tree to capture and store fine-grained, multi-dimensional reward criteria, and which dynamically selects reward signals for step-wise scoring. To handle multifaceted reward signals, we are the first to adopt Pareto dominance estimation to identify discriminative positive and negative pairs. Experimental results show that DG-PRM achieves strong performance on prevailing benchmarks, significantly boosting model performance on tasks with dense rewards. Further analysis shows that DG-PRM adapts well to out-of-distribution scenarios, demonstrating strong generalizability.
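The most concrete algorithmic detail in the abstract is the use of Pareto dominance over multi-dimensional reward signals to pick discriminative positive/negative step pairs. Below is a minimal Python sketch of that idea; the `Step` structure, the reward dimensions (correctness, relevance, clarity), and the pairing rule are illustrative assumptions for this sketch, not the paper's implementation.

```python
# Minimal sketch: selecting discriminative positive/negative step pairs
# via Pareto dominance over multi-dimensional reward vectors.
# Reward dimensions and the Step structure are assumptions, not DG-PRM's exact design.

from dataclasses import dataclass
from itertools import combinations
from typing import List, Tuple


@dataclass
class Step:
    """A reasoning step with scores along several reward criteria."""
    text: str
    rewards: Tuple[float, ...]  # e.g. (correctness, relevance, clarity)


def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """a Pareto-dominates b: no worse on every criterion, strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))


def discriminative_pairs(steps: List[Step]) -> List[Tuple[Step, Step]]:
    """Return (positive, negative) pairs where the positive step
    Pareto-dominates the negative one across all reward dimensions."""
    pairs = []
    for s1, s2 in combinations(steps, 2):
        if dominates(s1.rewards, s2.rewards):
            pairs.append((s1, s2))
        elif dominates(s2.rewards, s1.rewards):
            pairs.append((s2, s1))
        # Non-dominated pairs carry mixed signals and are skipped.
    return pairs


if __name__ == "__main__":
    steps = [
        Step("Set up the equation correctly.", (0.9, 0.8, 0.7)),
        Step("Made an arithmetic slip.", (0.3, 0.8, 0.6)),
        Step("Irrelevant digression.", (0.5, 0.2, 0.9)),
    ]
    for pos, neg in discriminative_pairs(steps):
        print(f"positive: {pos.text!r}  >  negative: {neg.text!r}")
```

Under this reading, a pair is kept only when one step is at least as good on every criterion and strictly better on at least one, so training pairs with conflicting signals across reward dimensions are excluded.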

Country of Origin
🇨🇳 🇭🇰 Hong Kong, China

Page Count
32 pages

Category
Computer Science:
Computation and Language