ResponseRank: Data-Efficient Reward Modeling through Preference Strength Learning
By: Timo Kaufmann, Yannick Metz, Daniel Keim, and more
Binary choices, as often used for reinforcement learning from human feedback (RLHF), convey only the direction of a preference. A person may choose apples over oranges and bananas over grapes, but which preference is stronger? Strength is crucial for decision-making under uncertainty and for the generalization of preference models, but it is hard to measure reliably. Metadata such as response times and inter-annotator agreement can serve as proxies for strength, but are often noisy and confounded. We propose ResponseRank to address the challenge of learning from noisy strength signals. Our method uses relative differences in proxy signals to rank responses to pairwise comparisons by their inferred preference strength. To control for systematic variation, we compare signals only locally, within carefully constructed strata. This enables robust learning of utility differences consistent with strength-derived rankings while making minimal assumptions about the strength signal. Our contributions are threefold: (1) ResponseRank, a novel method that robustly learns preference strength by leveraging locally valid relative strength signals; (2) empirical evidence of improved sample efficiency and robustness across diverse tasks: synthetic preference learning (with simulated response times), language modeling (with annotator agreement), and RL control tasks (with simulated episode returns); and (3) the Pearson Distance Correlation (PDC), a novel metric that isolates cardinal utility learning from ordinal accuracy.
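To make the abstract's idea concrete, the sketch below shows one plausible way such a training objective could look: a standard logistic (Bradley-Terry) loss for preference direction, plus a within-stratum ranking term that pushes comparisons with a stronger proxy signal toward larger utility gaps, and an assumed reading of PDC as the Pearson correlation between predicted and ground-truth utility differences. All names, the margin formulation, loss weights, and the PDC definition are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the ResponseRank idea from the abstract (not the authors' code).
import torch
import torch.nn.functional as F

def responserank_loss(u_chosen, u_rejected, strength_proxy, stratum, beta=1.0, margin=0.1):
    """u_chosen / u_rejected: model utilities of the preferred / dispreferred response
    per comparison; strength_proxy: proxy-derived strength (higher = stronger preference,
    only comparable within the same stratum); stratum: integer stratum ids per comparison."""
    gap = u_chosen - u_rejected                       # utility difference per comparison

    # (1) Direction: standard Bradley-Terry / logistic preference loss.
    direction_loss = -F.logsigmoid(beta * gap).mean()

    # (2) Strength: within each stratum, a comparison with a stronger proxy signal
    # should have a larger utility gap (pairwise hinge over gaps, assumed form).
    strength_terms = []
    for s in stratum.unique():
        idx = (stratum == s).nonzero(as_tuple=True)[0]
        if idx.numel() < 2:
            continue
        g, r = gap[idx], strength_proxy[idx]
        diff_r = r.unsqueeze(1) - r.unsqueeze(0)      # proxy-strength differences
        diff_g = g.unsqueeze(1) - g.unsqueeze(0)      # utility-gap differences
        mask = diff_r > 0                             # ordered pairs: i inferred stronger than j
        if mask.any():
            strength_terms.append(F.relu(margin - diff_g[mask]).mean())
    strength_loss = torch.stack(strength_terms).mean() if strength_terms else gap.new_zeros(())

    return direction_loss + strength_loss

def pearson_distance_correlation(pred_u, true_u):
    """Assumed reading of PDC: Pearson correlation between predicted and ground-truth
    utility *differences* over all item pairs, so a model scores well only if it
    recovers cardinal gaps, not merely the ordinal ranking."""
    pred_d = pred_u.unsqueeze(1) - pred_u.unsqueeze(0)
    true_d = true_u.unsqueeze(1) - true_u.unsqueeze(0)
    i, j = torch.triu_indices(pred_u.numel(), pred_u.numel(), offset=1)
    x, y = pred_d[i, j], true_d[i, j]
    x, y = x - x.mean(), y - y.mean()
    return (x * y).sum() / (x.norm() * y.norm() + 1e-8)
```

The key design choice this sketch tries to reflect is that the strength term never compares proxy values across strata, so annotator- or session-level confounds in the proxy signal cancel out, and only the locally valid relative ordering is used.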
Similar Papers
The Limits of Preference Data for Post-Training
Machine Learning (CS)
Makes AI better at tasks needing human judgment.
RewardRank: Optimizing True Learning-to-Rank Utility
Information Retrieval
Shows online stores what shoppers really want.
Model inference for ranking from pairwise comparisons
Social and Information Networks
Figures out who's best from messy game results.