Towards Reliable, Uncertainty-Aware Alignment
By: Debangshu Banerjee, Kintan Saha, Aditya Gopalan
Potential Business Impact:
Makes AI training more reliable by accounting for uncertainty in how it judges good answers.
Alignment of large language models (LLMs) typically involves training a reward model on preference data, followed by policy optimization with respect to the reward model. However, optimizing policies against a single reward model estimate can leave the resulting policy vulnerable to inaccuracies in that reward model. We empirically study the variability of reward model training on open-source benchmarks. We observe that reward models trained independently on the same preference dataset can exhibit substantial disagreement, highlighting the instability of current alignment strategies. Employing a theoretical model, we demonstrate that variability in reward model estimation can cause overfitting, which risks degrading policy performance. To mitigate this risk, we propose a variance-aware policy optimization framework for preference-based alignment. The key ingredient of the framework is a new policy regularizer that incorporates reward model variance estimates. We show that variance-aware policy optimization provably reduces the risk of outputting a worse policy than the default. Experiments across diverse LLM and reward model configurations confirm that our approach yields more stable and robust alignment than the standard (variance-unaware) pipeline.
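To make the idea concrete, below is a minimal sketch (not the paper's implementation) of a variance-aware policy objective, assuming the reward uncertainty is estimated from the disagreement of an ensemble of independently trained reward models, as the abstract's variability study suggests. All names and weights (variance_aware_objective, lam, beta) are illustrative assumptions; the paper's exact regularizer may differ.

```python
# Sketch: variance-penalized policy objective with a KL-style regularizer to a
# reference policy. Uncertainty is taken as per-sample disagreement across an
# ensemble of reward models (an assumption for illustration).
import torch


def variance_aware_objective(
    logprobs_policy: torch.Tensor,   # log pi(y|x) for sampled responses, shape (B,)
    logprobs_ref: torch.Tensor,      # log pi_ref(y|x) for the same responses, shape (B,)
    ensemble_rewards: torch.Tensor,  # rewards from K independently trained reward models, shape (K, B)
    lam: float = 1.0,                # weight on the uncertainty (variance) penalty
    beta: float = 0.1,               # weight on the regularizer toward the reference policy
) -> torch.Tensor:
    """Scalar objective to maximize: mean reward, penalized by reward-model
    disagreement and by divergence from the reference policy."""
    reward_mean = ensemble_rewards.mean(dim=0)   # per-sample mean reward, shape (B,)
    reward_std = ensemble_rewards.std(dim=0)     # per-sample disagreement, shape (B,)
    kl_term = logprobs_policy - logprobs_ref     # Monte Carlo estimate of the log-ratio
    per_sample = reward_mean - lam * reward_std - beta * kl_term
    return per_sample.mean()


# Toy usage: K=4 reward models scoring a batch of B=8 responses.
if __name__ == "__main__":
    torch.manual_seed(0)
    K, B = 4, 8
    ensemble_rewards = torch.randn(K, B)
    logprobs_policy = torch.randn(B, requires_grad=True)
    logprobs_ref = torch.randn(B)
    obj = variance_aware_objective(logprobs_policy, logprobs_ref, ensemble_rewards)
    (-obj).backward()  # minimize the negative objective with any optimizer
    print(f"objective = {obj.item():.4f}")
```

The design intent follows the abstract: responses on which the reward models disagree contribute less to the objective, so the policy is discouraged from exploiting reward estimates that may be unreliable.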
Similar Papers
Larger or Smaller Reward Margins to Select Preferences for Alignment?
Machine Learning (CS)
Helps AI learn what humans like better.
Two Minds Better Than One: Collaborative Reward Modeling for LLM Alignment
Machine Learning (CS)
Cleans AI's learning data for better results.
Preference Learning for AI Alignment: a Causal Perspective
Artificial Intelligence
Makes AI understand what people truly want.