A Survey of Process Reward Models: From Outcome Signals to Process Supervisions for Large Language Models

Published: October 9, 2025 | arXiv ID: 2510.08049v1

By: Congming Zheng, Jiachen Zhu, Zhuoying Ou, and more

Potential Business Impact:

Teaches AI models to reason step-by-step by rewarding each intermediate step, not just the final answer.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Although Large Language Models (LLMs) exhibit advanced reasoning ability, conventional alignment remains largely dominated by outcome reward models (ORMs) that judge only final answers. Process Reward Models (PRMs) address this gap by evaluating and guiding reasoning at the step or trajectory level. This survey provides a systematic overview of PRMs through the full loop: how to generate process data, build PRMs, and use PRMs for test-time scaling and reinforcement learning. We summarize applications across math, code, text, multimodal reasoning, robotics, and agents, and review emerging benchmarks. Our goal is to clarify design spaces, reveal open challenges, and guide future research toward fine-grained, robust reasoning alignment.
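
To make the ORM/PRM distinction concrete, here is a minimal sketch (not taken from the paper) of step-level versus outcome-level scoring used for best-of-N test-time scaling. The class and function names, the toy judges, and the min-aggregation rule are illustrative assumptions, not the survey's prescribed method.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation):
# an outcome reward model (ORM) scores only the final answer, while a process
# reward model (PRM) scores every intermediate step and aggregates the scores.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ReasoningTrace:
    steps: List[str]      # intermediate reasoning steps produced by the LLM
    final_answer: str     # answer extracted from the last step


def orm_score(trace: ReasoningTrace, judge_answer: Callable[[str], float]) -> float:
    """Outcome reward: a single scalar for the final answer only."""
    return judge_answer(trace.final_answer)


def prm_score(trace: ReasoningTrace, judge_step: Callable[[str], float]) -> float:
    """Process reward: score each step, then aggregate (minimum step score here;
    product or mean are other common choices)."""
    step_scores = [judge_step(s) for s in trace.steps]
    return min(step_scores) if step_scores else 0.0


def best_of_n(traces: List[ReasoningTrace],
              scorer: Callable[[ReasoningTrace], float]) -> ReasoningTrace:
    """Test-time scaling via best-of-N: sample N traces, keep the highest-scoring one."""
    return max(traces, key=scorer)


if __name__ == "__main__":
    # Toy judges standing in for learned reward models.
    judge_step = lambda step: 0.2 if "guess" in step else 0.9
    judge_answer = lambda ans: 1.0 if ans == "42" else 0.0

    traces = [
        ReasoningTrace(steps=["guess the answer"], final_answer="42"),
        ReasoningTrace(steps=["define variables", "solve equation"], final_answer="42"),
    ]

    # The ORM cannot distinguish these traces (both end in "42");
    # the PRM prefers the one whose intermediate steps are sound.
    best = best_of_n(traces, lambda t: prm_score(t, judge_step))
    print(best.steps)
```

In this sketch both candidates reach the same final answer, so an ORM-based selector is indifferent between them, whereas the PRM-based selector rewards the trace with sound intermediate steps; this is the step-level supervision the survey contrasts with outcome-only signals.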

Country of Origin
🇨🇳 China

Page Count
14 pages

Category
Computer Science:
Computation and Language