Alignment as Distribution Learning: Your Preference Model is Explicitly a Language Model
By: Jihun Yun, Juno Kim, Jongho Park, and more
Potential Business Impact:
Makes AI better at following instructions.
Alignment via reinforcement learning from human feedback (RLHF) has become the dominant paradigm for controlling the quality of outputs from large language models (LLMs). However, when viewed as "loss + regularization," the standard RLHF objective lacks theoretical justification and incentivizes degenerate, deterministic solutions, an issue that variants such as Direct Preference Optimization (DPO) also inherit. In this paper, we rethink alignment by framing it as distribution learning from pairwise preference feedback, explicitly modeling how information about the target language model is revealed through the preference data. This explicit modeling leads us to propose three principled learning objectives: preference maximum likelihood estimation, preference distillation, and reverse KL minimization. We theoretically show that all three approaches enjoy strong non-asymptotic $O(1/n)$ convergence to the target language model, naturally avoiding degeneracy and reward overfitting. Finally, we empirically demonstrate that our distribution learning framework, especially preference distillation, consistently outperforms or matches RLHF and DPO across various tasks and models.
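To make the "your preference model is explicitly a language model" framing concrete, the sketch below shows one way a policy-induced preference likelihood could be optimized. It assumes, purely for illustration, a Bradley-Terry-style link in which the probability that the preferred response wins equals the sigmoid of the difference in the policy's sequence log-probabilities; the paper's actual estimators (preference MLE, preference distillation, reverse KL minimization) may differ in detail, and the function name and tensor shapes here are hypothetical.

```python
# Minimal, hypothetical sketch (names and shapes are illustrative, not taken from the paper).
# Assumption: pairwise preferences are modeled by the policy itself via a
# Bradley-Terry-style link,
#   P(y_w preferred over y_l | x) = sigmoid(log pi_theta(y_w | x) - log pi_theta(y_l | x)),
# so maximizing the preference likelihood directly fits the language model.

import torch
import torch.nn.functional as F

def preference_mle_loss(logp_chosen: torch.Tensor,
                        logp_rejected: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of observed preferences under the
    policy-induced preference model.

    logp_chosen:   summed token log-probs of the preferred response, shape (batch,)
    logp_rejected: summed token log-probs of the rejected response, shape (batch,)
    """
    return -F.logsigmoid(logp_chosen - logp_rejected).mean()

# Example usage with dummy sequence log-probabilities:
logp_w = torch.tensor([-12.3, -8.7, -15.1])
logp_l = torch.tensor([-14.0, -9.5, -15.0])
loss = preference_mle_loss(logp_w, logp_l)
```

Unlike the standard RLHF/DPO objective, which scores responses through a separate (implicit) reward regularized toward a reference policy, a pure preference-likelihood objective of this form treats the policy itself as the preference model; the paper's degeneracy analysis and its distillation and reverse-KL variants add structure that this sketch does not capture.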
Similar Papers
A Stable and Principled Loss Function for Direct Language Model Alignment
Machine Learning (CS)
Makes AI understand what you want better.
Direct Preference Optimization with Unobserved Preference Heterogeneity: The Necessity of Ternary Preferences
Artificial Intelligence
Teaches AI to understand many different opinions.
Aligning Large Vision-Language Models by Deep Reinforcement Learning and Direct Preference Optimization
Machine Learning (CS)
Teaches AI to understand pictures and words better.