The Limits of Preference Data for Post-Training
By: Eric Zhao, Jessica Dai, Pranjal Awasthi
Potential Business Impact:
Makes AI better at tasks needing human judgment.
Recent progress in strengthening the capabilities of large language models has stemmed from applying reinforcement learning to domains with automatically verifiable outcomes. A key question is whether we can similarly use RL to optimize for outcomes in domains where evaluating outcomes inherently requires human feedback; for example, in tasks like deep research and trip planning, outcome evaluation is qualitative and there are many possible degrees of success. One attractive and scalable modality for collecting human feedback is preference data: ordinal rankings (pairwise or $k$-wise) that indicate, for $k$ given outcomes, which one is preferred. In this work, we study a critical roadblock: preference data fundamentally and significantly limits outcome-based optimization. Even with idealized preference data (infinite, noiseless, and online), the use of ordinal feedback can prevent obtaining even approximately optimal solutions. We formalize this impossibility using voting theory, drawing an analogy between how a model chooses an answer to a query and how voters choose a candidate to elect. This indicates that grounded human scoring and algorithmic innovations are necessary for extending the success of RL post-training to domains demanding human feedback. We also explore why these limitations have disproportionately impacted RLHF's ability to elicit reasoning behaviors (e.g., backtracking) compared to settings where RLHF has been historically successful (e.g., instruction-tuning and safety training), finding that the limitations of preference data primarily suppress RLHF's ability to elicit robust strategies, a class that encompasses most reasoning behaviors.
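To make the ordinal-vs-cardinal gap concrete, here is a minimal hypothetical sketch (not from the paper): with pairwise preference data, rankings discard preference *strength*, so the answer preferred by a majority of raters can have strictly lower total utility than the alternative. All rater names and scores below are invented for illustration.

```python
# Hypothetical example: ordinal pairwise preferences can favor an answer
# with lower total (cardinal) utility, because rankings discard how
# strongly each rater prefers one answer over the other.

utilities = {            # cardinal scores each rater assigns to answers A and B
    "rater1": {"A": 1.0, "B": 0.9},   # slightly prefers A
    "rater2": {"A": 1.0, "B": 0.9},   # slightly prefers A
    "rater3": {"A": 0.0, "B": 1.0},   # strongly prefers B
}

# Ordinal (preference-data) view: count pairwise wins.
votes_for_A = sum(u["A"] > u["B"] for u in utilities.values())
votes_for_B = sum(u["B"] > u["A"] for u in utilities.values())
ordinal_winner = "A" if votes_for_A > votes_for_B else "B"

# Cardinal (grounded-scoring) view: sum the actual scores.
totals = {ans: sum(u[ans] for u in utilities.values()) for ans in ("A", "B")}
cardinal_winner = max(totals, key=totals.get)

print(ordinal_winner)   # A wins the majority vote (2 of 3 raters)
print(cardinal_winner)  # B has the higher total utility (2.8 vs 2.0)
```

A reward model trained only on the three pairwise comparisons here would steer the policy toward A, even though B is the utilitarian optimum; this is the voting-theoretic flavor of the limitation the abstract describes.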
Similar Papers
Policy-labeled Preference Learning: Is Preference Enough for RLHF?
Machine Learning (CS)
Teaches computers to learn better from people.
Beyond Ordinal Preferences: Why Alignment Needs Cardinal Human Feedback
Artificial Intelligence
Makes AI better by asking for more detailed feedback.
When Personalization Meets Reality: A Multi-Faceted Analysis of Personalized Preference Learning
Computation and Language
Helps AI learn different people's opinions fairly.