Score: 1

Ask a Strong LLM Judge when Your Reward Model is Uncertain

Published: October 23, 2025 | arXiv ID: 2510.20369v1

By: Zhenghao Xu, Qin Lu, Qingru Zhang, and more

Potential Business Impact:

Cuts the cost of aligning LLMs with human feedback by routing only uncertain reward-model judgments to an expensive LLM judge.

Business Areas:
A/B Testing, Data and Analytics

The reward model (RM) plays a pivotal role in reinforcement learning with human feedback (RLHF) for aligning large language models (LLMs). However, classical RMs trained on human preferences are vulnerable to reward hacking and generalize poorly to out-of-distribution (OOD) inputs. By contrast, strong LLM judges equipped with reasoning capabilities demonstrate superior generalization, even without additional training, but incur significantly higher inference costs, limiting their applicability in online RLHF. In this work, we propose an uncertainty-based routing framework that efficiently complements a fast RM with a strong but costly LLM judge. Our approach formulates advantage estimation in policy gradient (PG) methods as pairwise preference classification, enabling principled uncertainty quantification to guide routing. Uncertain pairs are forwarded to the LLM judge, while confident ones are evaluated by the RM. Experiments on RM benchmarks demonstrate that our uncertainty-based routing strategy significantly outperforms random judge calling at the same cost, and downstream alignment results showcase its effectiveness in improving online RLHF.
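The abstract describes routing a response pair to the strong LLM judge only when the reward model's pairwise preference is uncertain. The sketch below is a minimal illustration of that idea, assuming a Bradley-Terry-style preference probability derived from scalar RM scores and a binary-entropy threshold for routing; the function names, the threshold value, and the `reward_model`/`llm_judge` callables are hypothetical placeholders, not the paper's implementation.

```python
import math
from typing import Callable

def preference_prob(score_a: float, score_b: float) -> float:
    """Bradley-Terry probability that response A beats response B,
    computed as a sigmoid of the reward-model score gap."""
    return 1.0 / (1.0 + math.exp(-(score_a - score_b)))

def pairwise_uncertainty(p: float) -> float:
    """Binary entropy of the preference probability; largest near p = 0.5,
    i.e., when the reward model cannot separate the two responses."""
    eps = 1e-12
    p = min(max(p, eps), 1.0 - eps)
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))

def route_preference(
    prompt: str,
    resp_a: str,
    resp_b: str,
    reward_model: Callable[[str, str], float],    # fast scalar RM: (prompt, response) -> score
    llm_judge: Callable[[str, str, str], float],  # costly judge: returns P(A preferred)
    entropy_threshold: float = 0.6,               # hypothetical routing threshold
) -> float:
    """Return P(A preferred over B): keep the cheap RM's verdict when it is
    confident, defer to the strong LLM judge when it is uncertain."""
    p_rm = preference_prob(reward_model(prompt, resp_a), reward_model(prompt, resp_b))
    if pairwise_uncertainty(p_rm) < entropy_threshold:
        return p_rm                               # confident pair: use the RM
    return llm_judge(prompt, resp_a, resp_b)      # uncertain pair: call the judge
```

In an online RLHF loop, the routed preference probability could then feed pairwise advantage estimates for policy-gradient updates, with the threshold tuned to the available judge-call budget.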

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)