Adversarial Training for Process Reward Models

Published: November 28, 2025 | arXiv ID: 2511.22888v1

By: Gurusha Juneja, Deepak Nathani, William Yang Wang

Potential Business Impact:

Trains AI models to detect mistakes in their own step-by-step reasoning, improving answer accuracy without costly human labeling.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Process Reward Models (PRMs) enhance the reasoning ability of LLMs by providing step-level supervision. However, their widespread adoption is limited by expensive manual step-level annotation and the poor generalization of static training data to novel errors. We introduce Adversarially Trained PRMs (APRM), in which a Generator (G) learns to produce reasoning errors that deceive a PRM (R), while R concurrently learns to detect them. This interaction yields progressively harder negatives for R, improving its robustness and generalization to novel errors without requiring manual step-level labels. Averaged across diverse mathematical reasoning benchmarks, APRM improves solver accuracy by +3.4 percentage points (pp) over the strongest PRM baseline, and achieves gains of +5.3 pp on out-of-distribution tasks.
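
The abstract describes an alternating adversarial loop: G proposes flawed reasoning steps, R is trained to flag them, and G adapts to evade R. The sketch below is only a toy illustration of that loop under stated assumptions; the models, features, and update rules (ToyGenerator, ToyPRM, toy_step_feature, the threshold update) are hypothetical placeholders, not the paper's architecture or training procedure.

```python
"""Toy sketch of the adversarial PRM training loop (illustrative only).

Assumptions: G is reduced to a 'subtlety' knob controlling how plausible its
injected errors look, and R is reduced to a threshold on a scalar plausibility
feature. The real APRM uses LLM-based components and step-level supervision.
"""
import random

random.seed(0)


def toy_step_feature(step_is_correct: bool, subtlety: float) -> float:
    """Scalar 'plausibility' of a reasoning step (hypothetical feature).
    Correct steps score high; flawed steps score higher as G gets subtler."""
    base = 0.9 if step_is_correct else 0.2 + 0.6 * subtlety
    return min(1.0, max(0.0, base + random.gauss(0, 0.05)))


class ToyPRM:
    """Stands in for the process reward model R: a thresholded scorer."""

    def __init__(self) -> None:
        self.threshold = 0.5  # steps scoring above this are judged correct

    def judge_correct(self, feature: float) -> bool:
        return feature > self.threshold

    def update(self, negatives: list[float]) -> None:
        # Move the threshold toward the hardest negatives R failed to catch.
        missed = [f for f in negatives if self.judge_correct(f)]
        if missed:
            self.threshold = min(0.95, 0.5 * self.threshold + 0.5 * max(missed))


class ToyGenerator:
    """Stands in for G: learns to make its injected errors subtler."""

    def __init__(self) -> None:
        self.subtlety = 0.1

    def make_flawed_step(self) -> float:
        return toy_step_feature(step_is_correct=False, subtlety=self.subtlety)

    def update(self, detection_rate: float) -> None:
        # The more errors R catches, the harder G pushes toward subtle ones.
        self.subtlety = min(1.0, self.subtlety + 0.1 * detection_rate)


G, R = ToyGenerator(), ToyPRM()
for rnd in range(5):
    negatives = [G.make_flawed_step() for _ in range(200)]
    detection_rate = sum(not R.judge_correct(f) for f in negatives) / len(negatives)
    R.update(negatives)       # R adapts to progressively harder negatives
    G.update(detection_rate)  # G adapts to evade R
    print(f"round {rnd}: detected={detection_rate:.2f} "
          f"threshold={R.threshold:.2f} subtlety={G.subtlety:.2f}")
```

The point of the sketch is the co-adaptation pattern: each round the generator's negatives get harder, so the reward model's decision boundary tightens without any manually labeled step-level errors.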

Country of Origin
🇺🇸 United States

Page Count
25 pages

Category
Computer Science:
Machine Learning (CS)