Maximizing the efficiency of human feedback in AI alignment: a comparative analysis
By: Andreas Chouliaras, Dimitris Chatzopoulos
Potential Business Impact:
Teaches AI to learn faster from people's choices.
Reinforcement Learning from Human Feedback (RLHF) relies on preference modeling to align machine learning systems with human values, yet the popular approach of random pair sampling with Bradley-Terry modeling is statistically limited and inefficient under constrained annotation budgets. In this work, we explore alternative sampling and evaluation strategies for preference inference in RLHF, drawing inspiration from game theory, statistics, and social choice theory. Our best-performing method, Swiss InfoGain, employs a Swiss tournament system with a proxy mutual-information-gain pairing rule; it significantly outperforms all other methods under constrained annotation budgets while also being more sample-efficient. Even in high-resource settings, we identify superior alternatives to the Bradley-Terry baseline. Our experiments demonstrate that adaptive, resource-aware strategies reduce redundancy, enhance robustness, and yield statistically significant improvements in preference learning, highlighting the importance of balancing alignment quality with human workload in RLHF pipelines.
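The sketch below contrasts the two sampling strategies the abstract describes: the random-pairing Bradley-Terry baseline and a Swiss-tournament round that pairs items of similar current score, preferring high-uncertainty comparisons as a proxy for mutual information gain. It is a minimal illustration only; the entropy-based proxy, the candidate window size, and all function names are assumptions, not the paper's exact Swiss InfoGain formulation.

```python
# Illustrative sketch, not the authors' implementation.
import numpy as np

def bt_win_prob(theta_i, theta_j):
    """Bradley-Terry probability that item i is preferred over item j."""
    return 1.0 / (1.0 + np.exp(-(theta_i - theta_j)))

def random_pairs(n_items, budget, rng):
    """Baseline: spend the annotation budget on uniformly random pairs."""
    return [tuple(rng.choice(n_items, size=2, replace=False)) for _ in range(budget)]

def swiss_round_pairs(scores, theta):
    """One Swiss-style round: sort items by current score, then pair each item
    with a nearby candidate whose comparison has the highest outcome entropy
    (used here as a proxy for expected information gain)."""
    order = np.argsort(-scores)      # similar-strength items become neighbours
    unpaired = list(order)
    pairs = []
    while len(unpaired) > 1:
        i = unpaired.pop(0)
        window = unpaired[:4]        # assumed small candidate window
        p = np.array([bt_win_prob(theta[i], theta[j]) for j in window])
        entropy = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
        j = window[int(np.argmax(entropy))]
        unpaired.remove(j)
        pairs.append((i, j))
    return pairs
```

Pairing adjacent items by current score mirrors the Swiss system's rule of matching opponents of similar strength, which concentrates a limited annotation budget on near-tied, informative comparisons rather than on redundant mismatches.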
Similar Papers
Robust Reinforcement Learning from Human Feedback for Large Language Models Fine-Tuning
Machine Learning (Stat)
Makes AI understand what people want better.
Contextual Online Uncertainty-Aware Preference Learning for Human Feedback
Machine Learning (Stat)
Teaches AI to learn what people like.