Direct Preference Optimization with Unobserved Preference Heterogeneity: The Necessity of Ternary Preferences
By: Keertana Chidambaram, Karthik Vinay Seetharaman, Vasilis Syrgkanis
Potential Business Impact:
Teaches AI to understand many different opinions.
Reinforcement Learning from Human Feedback (RLHF) has become central to aligning large language models with human values, typically by first learning a reward model from preference data, which is then used to update the model with reinforcement learning. Recent alternatives such as Direct Preference Optimization (DPO) simplify this pipeline by optimizing directly on preferences. However, both approaches often assume uniform annotator preferences and rely on binary comparisons, overlooking two key issues: the diversity of human evaluators and the limited information carried by pairwise feedback. In this work, we address both. First, we connect preference learning in RLHF with the econometrics literature and show that binary comparisons are insufficient for identifying latent user preferences when each user contributes only finitely many comparisons, even with infinitely many users, whereas (even incomplete) rankings over three or more responses ensure identifiability. Second, we introduce methods to incorporate heterogeneous preferences into alignment algorithms. We develop an Expectation-Maximization adaptation of DPO that discovers latent annotator types and trains a mixture of LLMs accordingly. We then propose an aggregation algorithm with a min-max regret fairness criterion to produce a single generative policy with equitable performance guarantees. Together, these contributions establish a theoretical and algorithmic framework for fairness and personalization in generative model alignment for diverse users.
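The EM-over-DPO idea in the abstract can be made concrete with a small sketch. The toy below is not the authors' implementation: it replaces the LLM policies with linear "reward margin" parameters over precomputed features (a hypothetical feat_diff array standing in for the chosen-minus-rejected log-probability margins), and alternates an E-step that assigns each comparison a posterior over latent annotator types with an M-step that takes a responsibility-weighted, DPO-style gradient step for each type. All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def em_dpo_sketch(feat_diff, n_types=2, beta=0.1, n_iters=50, lr=0.5, seed=0):
    """Toy EM loop over a DPO-style preference likelihood.

    feat_diff: (n_pairs, d) array, a stand-in for per-pair log-prob margin
    features (chosen minus rejected) that an LLM policy would produce.
    Each latent annotator type k has a parameter vector theta_k; its
    preference likelihood for a pair is sigmoid(beta * feat_diff @ theta_k).
    """
    rng = np.random.default_rng(seed)
    n, d = feat_diff.shape
    thetas = rng.normal(scale=0.1, size=(n_types, d))   # one "policy" per type
    mix = np.full(n_types, 1.0 / n_types)                # mixture weights

    for _ in range(n_iters):
        # E-step: posterior responsibility of each type for each comparison.
        lik = sigmoid(beta * feat_diff @ thetas.T)        # (n, K)
        resp = lik * mix
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: responsibility-weighted gradient step on the DPO-style
        # negative log-likelihood for each type, then refresh mixture weights.
        for k in range(n_types):
            p = sigmoid(beta * feat_diff @ thetas[k])
            grad = -(resp[:, k] * (1.0 - p))[:, None] * beta * feat_diff
            thetas[k] -= lr * grad.mean(axis=0)
        mix = resp.mean(axis=0)

    return thetas, mix

# Example: 200 synthetic comparisons with 4 margin features.
thetas, mix = em_dpo_sketch(np.random.default_rng(1).normal(size=(200, 4)))
```

The paper's second component, min-max regret aggregation, would then combine the learned type-specific policies into a single policy chosen to minimize the worst-case regret across annotator types; that step is omitted from this sketch.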
Similar Papers
Difficulty-Based Preference Data Selection by DPO Implicit Reward Gap
Computation and Language
Chooses smart examples to teach AI better.
Explicit Preference Optimization: No Need for an Implicit Reward Model
Machine Learning (CS)
Makes AI learn better without extra steps.
When Human Preferences Flip: An Instance-Dependent Robust Loss for RLHF
Artificial Intelligence
Fixes AI mistakes from bad human feedback.