Score: 1

Towards Reliable, Uncertainty-Aware Alignment

Published: July 21, 2025 | arXiv ID: 2507.15906v1

By: Debangshu Banerjee, Kintan Saha, Aditya Gopalan

Potential Business Impact:

Makes LLM alignment more reliable by accounting for uncertainty in reward models, reducing the risk that fine-tuning degrades model quality.

Business Areas:
A/B Testing, Data and Analytics

Alignment of large language models (LLMs) typically involves training a reward model on preference data, followed by policy optimization with respect to the reward model. However, optimizing policies against a single reward model estimate can leave the resulting policy vulnerable to inaccuracies in that reward model. We empirically study the variability of reward model training on open-source benchmarks. We observe that reward models trained independently on the same preference dataset can exhibit substantial disagreement, highlighting the instability of current alignment strategies. Employing a theoretical model, we demonstrate that variability in reward model estimation can cause overfitting, leading to the risk of performance degradation. To mitigate this risk, we propose a variance-aware policy optimization framework for preference-based alignment. The key ingredient of the framework is a new policy regularizer that incorporates reward model variance estimates. We show that variance-aware policy optimization provably reduces the risk of outputting a worse policy than the default. Experiments across diverse LLM and reward model configurations confirm that our approach yields more stable and robust alignment than the standard (variance-unaware) pipeline.
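To make the core idea concrete, below is a minimal, illustrative sketch of a variance-aware policy objective, not the authors' implementation. It assumes reward-model uncertainty is estimated from an ensemble of K independently trained reward models and that the variance penalty takes the simple form mean minus a coefficient times the standard deviation; the function name `variance_penalized_objective` and the parameters `var_coef` and `kl_coef` are hypothetical, and the paper's actual regularizer may differ.

```python
# Illustrative sketch (assumptions noted above), not the paper's exact method:
# variance-aware policy optimization over a fixed set of candidate responses,
# where disagreement across an ensemble of reward models penalizes the policy.
import numpy as np

def variance_penalized_objective(ensemble_rewards, ref_logprobs, policy_logits,
                                 var_coef=1.0, kl_coef=0.1):
    """Scalar objective for one prompt.

    ensemble_rewards: (K, N) rewards from K independently trained reward models
                      for N candidate responses.
    ref_logprobs:     (N,) log-probabilities of the candidates under a reference policy.
    policy_logits:    (N,) unnormalized logits of the policy being optimized.
    """
    mean_r = ensemble_rewards.mean(axis=0)        # per-candidate mean reward
    std_r = ensemble_rewards.std(axis=0, ddof=1)  # per-candidate ensemble disagreement

    # Pessimistic, variance-aware reward: penalize candidates the ensemble disagrees on.
    penalized_r = mean_r - var_coef * std_r

    # Softmax policy over the candidates (numerically stabilized).
    shifted = policy_logits - policy_logits.max()
    logp = shifted - np.log(np.sum(np.exp(shifted)))
    p = np.exp(logp)

    expected_reward = np.sum(p * penalized_r)
    kl_to_ref = np.sum(p * (logp - ref_logprobs))  # KL(pi || pi_ref) regularizer
    return expected_reward - kl_coef * kl_to_ref

# Tiny usage example with synthetic numbers.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K, N = 5, 4                                   # 5 reward models, 4 candidate responses
    ensemble_rewards = rng.normal(size=(K, N))
    ref_logprobs = np.log(np.full(N, 1.0 / N))    # uniform reference policy
    logits = np.zeros(N)
    print(variance_penalized_objective(ensemble_rewards, ref_logprobs, logits))
```

The design intuition mirrors the abstract: where the ensemble disagrees (high variance), the penalized reward is lower, so the optimized policy stays closer to the reference rather than overfitting to a single, possibly inaccurate, reward estimate.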

Repos / Data Links

Page Count
35 pages

Category
Computer Science:
Machine Learning (CS)