Adversarial Training for Process Reward Models
By: Gurusha Juneja, Deepak Nathani, William Yang Wang
Potential Business Impact:
Teaches AI to find and fix its own mistakes.
Process Reward Models (PRMs) enhance the reasoning ability of LLMs by providing step-level supervision. However, their widespread adoption is limited by expensive manual step-level annotation and the poor generalization of static training data to novel errors. We introduce Adversarially Trained PRMs (APRM), in which a Generator (G) learns to produce reasoning errors that deceive a PRM (R), while R concurrently learns to detect them. This interaction yields progressively harder negatives for R, improving its robustness and generalization to novel errors without requiring manual step-level labels. Averaged across diverse mathematical reasoning benchmarks, APRM improves solver accuracy by +3.4 percentage points (pp) over the strongest PRM baseline, and achieves gains of +5.3 pp on out-of-distribution tasks.
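To make the adversarial loop concrete, here is a minimal toy sketch of the interaction the abstract describes: a generator G produces corrupted reasoning steps to fool a reward model R, and R is rewarded for catching them, so each side supplies progressively harder examples for the other. All names and scoring functions below (toy_generator, toy_prm_score, adversarial_round, the skill parameters) are hypothetical stand-ins for illustration, not the authors' implementation, which would use an LLM generator and a learned PRM.

```python
# Toy sketch of an adversarial generator-vs-PRM training loop.
# Hypothetical stand-ins only; the real APRM setup uses LLMs, not scalar "skills".
import random


def toy_generator(correct_step: str, deception_skill: float) -> str:
    """Stand-in for G: perturbs a correct step into a (subtle) error."""
    return correct_step + f" [corrupted, subtlety={deception_skill:.2f}]"


def toy_prm_score(step: str, detection_skill: float) -> float:
    """Stand-in for R: returns the probability that the step is correct."""
    if "[corrupted" in step:
        # A more skilled PRM is more likely to flag the corruption.
        return max(0.0, 1.0 - detection_skill - random.uniform(0.0, 0.3))
    return 1.0


def adversarial_round(correct_steps, g_skill, r_skill, lr=0.05):
    """One round: G tries to fool R; whichever side wins gets nudged upward."""
    fooled = 0
    for step in correct_steps:
        adversarial_step = toy_generator(step, g_skill)
        score = toy_prm_score(adversarial_step, r_skill)
        if score > 0.5:        # R was deceived -> reward G
            fooled += 1
            g_skill += lr
        else:                  # R caught the error -> reward R
            r_skill += lr
    return g_skill, r_skill, fooled / len(correct_steps)


if __name__ == "__main__":
    steps = [f"step {i}: ..." for i in range(8)]
    g, r = 0.1, 0.1
    for t in range(5):
        g, r, fool_rate = adversarial_round(steps, g, r)
        print(f"round {t}: fool_rate={fool_rate:.2f}, G={g:.2f}, R={r:.2f}")
```

Running the sketch, the fool rate starts high and drops as R's detection skill grows, which mirrors the paper's claim that the interaction supplies progressively harder negatives without any manual step-level labels.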
Similar Papers
Efficient Process Reward Model Training via Active Learning
Machine Learning (CS)
Teaches computers to learn faster with less work.
AgentPRM: Process Reward Models for LLM Agents via Step-Wise Promise and Progress
Computation and Language
Helps AI make better choices step-by-step.
VRPRM: Process Reward Modeling via Visual Reasoning
Machine Learning (CS)
Teaches computers to think better with less data.