Uncertainty Quantification for Large Language Model Reward Learning under Heterogeneous Human Feedback
By: Pangpang Liu, Junwei Lu, Will Wei Sun
Potential Business Impact:
Helps AI systems learn what people prefer and say how confident they are about it.
We study estimation and statistical inference for reward models used in aligning large language models (LLMs). A key component of LLM alignment is reinforcement learning from human feedback (RLHF), where humans compare pairs of model-generated answers and their preferences are used to train a reward model. However, human feedback is inherently heterogeneous, creating significant challenges for reliable reward learning. To address this, we adopt a heterogeneous preference framework that jointly models the latent reward of answers and human rationality. This leads to a challenging biconvex optimization problem, which we solve via an alternating gradient descent algorithm. We establish theoretical guarantees for the resulting estimator, including its convergence and asymptotic distribution. These results enable the construction of confidence intervals for reward estimates. Leveraging these uncertainty quantification results, we conduct valid statistical comparisons between rewards and incorporate uncertainty into the best-of-$N$ (BoN) policy framework. Extensive simulations demonstrate the effectiveness of our method, and applications to real LLM data highlight the practical value of accounting for uncertainty in reward modeling for LLM alignment.
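The abstract does not spell out the model or algorithm in detail, so the sketch below is only illustrative, not the authors' implementation. It assumes a Bradley-Terry-style heterogeneous preference model in which annotator u has a rationality parameter beta_u and answer k has a latent reward r_k, with P(u prefers i over j) = sigmoid(beta_u * (r_i - r_j)). The anchoring constraint (beta_1 = 1), the step sizes, and the diagonal plug-in Fisher-information confidence intervals are simplifying assumptions layered on top of the abstract's description of alternating gradient descent and uncertainty quantification.

```python
# Minimal sketch (assumptions noted above, not the paper's exact procedure):
# alternating gradient ascent for a heterogeneous Bradley-Terry-style model
# with per-annotator rationality beta_u and latent answer rewards r_k, plus
# simple plug-in confidence intervals for the estimated rewards.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# --- simulate heterogeneous pairwise feedback (hypothetical setup) ---
K, U, N = 8, 5, 4000                       # answers, annotators, comparisons
r_true = rng.normal(size=K)
r_true -= r_true.mean()                    # center rewards for identifiability
beta_true = rng.uniform(0.5, 3.0, size=U)  # annotator rationality levels
beta_true[0] = 1.0                         # pin one annotator to fix the scale
i_idx = rng.integers(0, K, size=N)
j_idx = rng.integers(0, K, size=N)
keep = i_idx != j_idx
i_idx, j_idx = i_idx[keep], j_idx[keep]
u_idx = rng.integers(0, U, size=len(i_idx))
p = sigmoid(beta_true[u_idx] * (r_true[i_idx] - r_true[j_idx]))
y = rng.binomial(1, p)                     # 1 if answer i is preferred over j

# --- alternating gradient ascent on the log-likelihood ---
r = np.zeros(K)
beta = np.ones(U)
lr_r, lr_beta, n_iters = 1.0, 1.0, 2000
for _ in range(n_iters):
    # gradient step in r with beta held fixed
    resid = y - sigmoid(beta[u_idx] * (r[i_idx] - r[j_idx]))
    g_r = np.zeros(K)
    np.add.at(g_r, i_idx,  beta[u_idx] * resid)
    np.add.at(g_r, j_idx, -beta[u_idx] * resid)
    r += lr_r * g_r / len(y)
    r -= r.mean()                          # re-center each iteration

    # gradient step in beta with r held fixed
    diff = r[i_idx] - r[j_idx]
    resid = y - sigmoid(beta[u_idx] * diff)
    g_beta = np.zeros(U)
    np.add.at(g_beta, u_idx, diff * resid)
    beta += lr_beta * g_beta / len(y)
    beta = np.clip(beta, 1e-3, None)       # keep rationality positive
    beta[0] = 1.0                          # anchor removes the r/beta scale ambiguity

# --- plug-in confidence intervals for the reward estimates ---
# Diagonal of the observed Fisher information for r with beta fixed:
# a simplification of the asymptotic-distribution result described in the abstract.
diff = r[i_idx] - r[j_idx]
w = beta[u_idx] ** 2 * sigmoid(beta[u_idx] * diff) * sigmoid(-beta[u_idx] * diff)
info = np.zeros(K)
np.add.at(info, i_idx, w)
np.add.at(info, j_idx, w)
se = 1.0 / np.sqrt(info)
for k in range(K):
    lo, hi = r[k] - 1.96 * se[k], r[k] + 1.96 * se[k]
    print(f"answer {k}: r_hat={r[k]:+.3f} (true {r_true[k]:+.3f}) 95% CI [{lo:+.3f}, {hi:+.3f}]")
```

In the spirit of the abstract's best-of-$N$ discussion, the same standard errors could be folded into candidate selection, for example ranking answers by a lower confidence bound such as r_hat_k - 1.96 * se_k rather than by the point estimate alone.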
Similar Papers
Robust Reinforcement Learning from Human Feedback for Large Language Models Fine-Tuning
Machine Learning (Stat)
Makes LLM fine-tuning from human feedback more robust.
Ask a Strong LLM Judge when Your Reward Model is Uncertain
Machine Learning (CS)
Defers to a stronger LLM judge when the reward model is uncertain.
Contextual Online Uncertainty-Aware Preference Learning for Human Feedback
Machine Learning (Stat)
Learns people's preferences from feedback online while tracking uncertainty.