Score: 2

Beyond Ordinal Preferences: Why Alignment Needs Cardinal Human Feedback

Published: August 11, 2025 | arXiv ID: 2508.08486v1

By: Parker Whitfill, Stewy Slocum

BigTech Affiliations: Massachusetts Institute of Technology

Potential Business Impact:

Improves LLM alignment by eliciting cardinal (willingness-to-pay) feedback instead of only binary preference comparisons, letting fine-tuning prioritize high-impact improvements.

Alignment techniques for LLMs rely on optimizing preference-based objectives -- where these preferences are typically elicited as ordinal, binary choices between responses. Recent work has focused on improving label quality or mitigating particular biases, but we identify a more fundamental limitation: these methods collect the wrong kind of data. We prove an impossibility result: no algorithm relying solely on ordinal comparisons can systematically recover the most preferred model. Intuitively, ordinal data lacks the information needed to resolve tradeoffs -- e.g., fixing a factual error on one prompt versus improving style on another. We show that selecting the optimal model requires recovering preferences over *models* (rather than just responses), which can only be identified given cardinal feedback about response quality. To address this, we collect and publicly release a dataset of 25,000 cardinal judgments using willingness-to-pay elicitations, a well-established tool from experimental economics. Empirically, we find that incorporating cardinal feedback into preference fine-tuning allows models to prioritize high-impact improvements and outperform ordinal-only methods on downstream benchmarks, such as Arena-Hard.
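To make the tradeoff intuition concrete, here is a minimal toy sketch (not from the paper's released code or data); the model names, prompts, and dollar values are invented for illustration. It shows how two models can tie on ordinal win counts while cardinal willingness-to-pay values still separate them.

```python
# Toy illustration: why ordinal comparisons alone cannot identify the
# preferred model, while cardinal (willingness-to-pay) values can.
# All names and numbers are hypothetical.

prompts = ["factual question", "style-sensitive question"]

# Ordinal outcome: model_a wins one prompt, model_b wins the other,
# so win counts tie and ordinal data cannot resolve the overall ranking.
ordinal_winner = {
    "factual question": "model_a",
    "style-sensitive question": "model_b",
}

# Cardinal outcome: how much the rater would pay to get the winning
# response instead of the losing one. Fixing the factual error matters
# far more than the style improvement.
wtp = {"factual question": 5.00, "style-sensitive question": 0.50}

def ordinal_score(model: str) -> int:
    """Win count -- the only signal binary comparisons provide."""
    return sum(1 for p in prompts if ordinal_winner[p] == model)

def cardinal_score(model: str) -> float:
    """WTP gained on prompts the model wins minus WTP lost on prompts it loses."""
    return sum(wtp[p] if ordinal_winner[p] == model else -wtp[p] for p in prompts)

for m in ("model_a", "model_b"):
    print(m, "ordinal:", ordinal_score(m), "cardinal:", round(cardinal_score(m), 2))
# Ordinal scores tie (1 vs 1); cardinal scores separate the models
# (+4.5 vs -4.5), identifying model_a as the one raters would pay more to use.
```

In a fine-tuning setting, the same idea suggests weighting each preference pair by its elicited dollar value so the objective emphasizes high-impact fixes, which is the behavior the paper reports on benchmarks such as Arena-Hard.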

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
23 pages

Category
Computer Science: Artificial Intelligence