Greedy Sampling Is Provably Efficient for RLHF
By: Di Wu, Chengshuai Shi, Jing Yang, and more
Potential Business Impact:
Teaches AI to learn better from what people like.
Reinforcement Learning from Human Feedback (RLHF) has emerged as a key technique for post-training large language models. Despite its empirical success, the theoretical understanding of RLHF is still limited, as learning the KL-regularized target with only preference feedback poses additional challenges compared with canonical RL. Existing works mostly study the reward-based Bradley-Terry (BT) preference model and extend classical designs that rely on optimism or pessimism. This work, instead, considers the general preference model (whose practical relevance has been observed recently) and obtains performance guarantees with major, order-wise improvements over existing ones. Surprisingly, these results are derived from algorithms that directly use the empirical estimates (i.e., greedy sampling), rather than constructing optimistic or pessimistic estimates as in previous works. This insight is deeply rooted in a unique structural property of the optimal policy class under the KL-regularized target; we further specialize it to the BT model, highlighting the surprising sufficiency of greedy sampling in RLHF.
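To make the contrast concrete, here is a minimal Python sketch, not the paper's exact algorithm, illustrating the idea in a toy bandit-style setting: under the KL-regularized objective, the optimal policy has the closed form pi(y|x) proportional to pi_ref(y|x) * exp(r(x,y)/beta), so "greedy sampling" simply plugs the empirical reward estimate into this form, whereas an optimistic design would first inflate the estimate with an uncertainty bonus. The names (r_hat, bonus, beta) and the toy numbers are hypothetical and chosen only for illustration.

```python
import numpy as np

def kl_regularized_policy(rewards, ref_probs, beta):
    """Closed-form KL-regularized optimum: pi(y|x) ∝ pi_ref(y|x) * exp(r(x,y)/beta)."""
    logits = np.log(ref_probs) + rewards / beta
    weights = np.exp(logits - logits.max())  # subtract max for numerical stability
    return weights / weights.sum()

def greedy_policy(reward_estimate, ref_probs, beta):
    # Greedy sampling: plan directly with the empirical estimate r_hat.
    return kl_regularized_policy(reward_estimate, ref_probs, beta)

def optimistic_policy(reward_estimate, bonus, ref_probs, beta):
    # Optimism: inflate the estimate by an exploration bonus before planning.
    return kl_regularized_policy(reward_estimate + bonus, ref_probs, beta)

# Toy example: three candidate responses to a single prompt.
ref_probs = np.array([0.5, 0.3, 0.2])   # reference policy pi_ref(y|x)
r_hat = np.array([1.0, 0.2, -0.5])      # empirical reward estimate (e.g., a BT-model MLE)
bonus = np.array([0.1, 0.4, 0.6])       # hypothetical uncertainty bonus
beta = 0.5                              # KL-regularization strength

print("greedy    :", greedy_policy(r_hat, ref_probs, beta))
print("optimistic:", optimistic_policy(r_hat, bonus, ref_probs, beta))
```

The sketch shows the design choice the abstract highlights: because the KL-regularized optimum anchors the policy to pi_ref, plugging in the empirical estimate already yields a well-behaved policy, which is the intuition behind greedy sampling sufficing without explicit optimism or pessimism.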
Similar Papers
Towards Efficient Online Exploration for Reinforcement Learning with Human Feedback
Machine Learning (Stat)
Teaches AI to learn what people like faster.
Robust Reinforcement Learning from Human Feedback for Large Language Models Fine-Tuning
Machine Learning (Stat)
Makes AI understand what people want better.
Maximizing the efficiency of human feedback in AI alignment: a comparative analysis
Human-Computer Interaction
Teaches AI to learn faster from people's choices.