Ask a Strong LLM Judge when Your Reward Model is Uncertain
By: Zhenghao Xu, Qin Lu, Qingru Zhang, and others
Potential Business Impact:
Lets AI learn better by using smart guessing.
The reward model (RM) plays a pivotal role in reinforcement learning from human feedback (RLHF) for aligning large language models (LLMs). However, classical RMs trained on human preferences are vulnerable to reward hacking and generalize poorly to out-of-distribution (OOD) inputs. By contrast, strong LLM judges equipped with reasoning capabilities demonstrate superior generalization even without additional training, but they incur significantly higher inference costs, which limits their applicability in online RLHF. In this work, we propose an uncertainty-based routing framework that efficiently complements a fast RM with a strong but costly LLM judge. Our approach formulates advantage estimation in policy gradient (PG) methods as pairwise preference classification, enabling principled uncertainty quantification to guide routing: uncertain pairs are forwarded to the LLM judge, while confident ones are evaluated by the RM. Experiments on RM benchmarks demonstrate that our uncertainty-based routing strategy significantly outperforms random judge calls at the same cost, and downstream alignment results showcase its effectiveness in improving online RLHF.
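The routing rule described in the abstract can be pictured as a small decision function over response pairs. The sketch below is an illustrative reconstruction, not the paper's implementation: it assumes a Bradley-Terry preference probability computed from scalar RM scores, uses binary entropy as the uncertainty measure, and treats `rm_score`, `llm_judge`, and `entropy_threshold` as hypothetical interfaces and parameters.

```python
import math
from typing import Callable, Tuple


def preference_probability(reward_a: float, reward_b: float) -> float:
    """Bradley-Terry style probability that response A is preferred over B,
    derived from scalar reward-model scores (assumed formulation)."""
    return 1.0 / (1.0 + math.exp(-(reward_a - reward_b)))


def preference_entropy(p: float) -> float:
    """Binary entropy of the pairwise preference classification.
    High entropy means the RM is uncertain about which response is better."""
    eps = 1e-12
    p = min(max(p, eps), 1.0 - eps)
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))


def route_pair(
    prompt: str,
    response_a: str,
    response_b: str,
    rm_score: Callable[[str, str], float],        # fast reward model (hypothetical interface)
    llm_judge: Callable[[str, str, str], float],  # strong LLM judge returning P(A > B) (hypothetical interface)
    entropy_threshold: float = 0.6,               # illustrative cutoff, not from the paper
) -> Tuple[float, str]:
    """Return (probability that A is preferred over B, which scorer was used).

    Confident pairs are settled by the cheap RM; uncertain pairs are
    escalated to the expensive LLM judge.
    """
    p_rm = preference_probability(
        rm_score(prompt, response_a), rm_score(prompt, response_b)
    )
    if preference_entropy(p_rm) < entropy_threshold:
        return p_rm, "reward_model"
    return llm_judge(prompt, response_a, response_b), "llm_judge"


# Toy usage with stand-in scorers (illustrative only).
fake_rm = lambda prompt, response: float(len(response))  # pretend longer responses score higher
fake_judge = lambda prompt, a, b: 0.9                    # pretend the judge strongly prefers A
p, source = route_pair("Explain RLHF.", "short", "a much longer answer", fake_rm, fake_judge)
print(p, source)  # low-entropy pair, so the reward model handles it
```

Any monotone uncertainty score (e.g., |p - 0.5| instead of entropy) would yield the same routing order here; the threshold is the knob that trades judge cost against label quality.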
Similar Papers
Uncertainty Quantification for Large Language Model Reward Learning under Heterogeneous Human Feedback
Machine Learning (Stat)
Makes AI understand what people like better.
Approximating Human Preferences Using a Multi-Judge Learned System
Artificial Intelligence
Makes AI understand what people truly want.
Reward Model Routing in Alignment
Artificial Intelligence
Helps AI learn better by using many "teachers."