Beyond Ordinal Preferences: Why Alignment Needs Cardinal Human Feedback
By: Parker Whitfill, Stewy Slocum
Potential Business Impact:
Makes AI better by asking for more detailed feedback.
Alignment techniques for LLMs rely on optimizing preference-based objectives -- where these preferences are typically elicited as ordinal, binary choices between responses. Recent work has focused on improving label quality or mitigating particular biases, but we identify a more fundamental limitation: these methods collect the wrong kind of data. We prove an impossibility result: no algorithm relying solely on ordinal comparisons can systematically recover the most preferred model. Intuitively, ordinal data lacks the information needed to resolve tradeoffs -- e.g., fixing a factual error on one prompt versus improving style on another. We show that selecting the optimal model requires recovering preferences over models (rather than just responses), which can only be identified given cardinal feedback about response quality. To address this, we collect and publicly release a dataset of 25,000 cardinal judgments using willingness-to-pay elicitations, a well-established tool from experimental economics. Empirically, we find that incorporating cardinal feedback into preference fine-tuning allows models to prioritize high-impact improvements and outperform ordinal-only methods on downstream benchmarks, such as Arena-Hard.
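The abstract does not spell out how cardinal judgments enter the training objective, so the sketch below is one plausible reading rather than the authors' implementation: weight each preference pair by the rater's willingness-to-pay gap inside a DPO-style pairwise loss, so that high-impact comparisons (e.g., a factual fix) contribute more gradient than low-impact ones (e.g., a style tweak). The function and variable names (cardinal_weighted_loss, wtp_gap) and the choice of a DPO-style objective are illustrative assumptions.

```python
# Hypothetical sketch: weighting preference pairs by a cardinal quality gap.
# This is an illustration of the general idea, not the paper's method.
import torch
import torch.nn.functional as F

def cardinal_weighted_loss(logp_chosen, logp_rejected,
                           ref_logp_chosen, ref_logp_rejected,
                           wtp_gap, beta=0.1):
    """DPO-style pairwise loss scaled by a cardinal quality gap.

    logp_*     : summed log-probabilities of each response under the policy.
    ref_logp_* : the same quantities under the frozen reference model.
    wtp_gap    : willingness-to-pay difference between the two responses
                 (e.g. in dollars); larger gaps up-weight the pair.
    """
    # Implicit reward margin, as in standard DPO.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Ordinal-only training corresponds to a constant weight of 1 here;
    # cardinal feedback lets high-impact comparisons dominate the gradient.
    weights = wtp_gap / (wtp_gap.mean() + 1e-8)
    # softplus(-margin) == -log sigmoid(margin), the usual pairwise loss.
    return (weights * F.softplus(-margin)).mean()

# Toy batch: one pair with a large cardinal gap (factual fix) and one
# with a small gap (style tweak).
logp_c = torch.tensor([-12.0, -10.0])
logp_r = torch.tensor([-13.0, -10.5])
ref_c = torch.tensor([-12.5, -10.2])
ref_r = torch.tensor([-12.8, -10.4])
gaps = torch.tensor([2.00, 0.05])  # dollars a rater would pay for the better response
print(cardinal_weighted_loss(logp_c, logp_r, ref_c, ref_r, gaps))
```

Setting the weights to a constant recovers an ordinal-only objective, which is exactly the information the impossibility result says cannot disambiguate tradeoffs across prompts.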
Similar Papers
The Limits of Preference Data for Post-Training
Machine Learning (CS)
Makes AI better at tasks needing human judgment.
Preference Learning for AI Alignment: a Causal Perspective
Artificial Intelligence
Makes AI understand what people truly want.
The Reward Model Selection Crisis in Personalized Alignment
Artificial Intelligence
Helps AI learn what you really want.