Policy-labeled Preference Learning: Is Preference Enough for RLHF?
By: Taehyun Cho, Seokhun Ju, Seungyub Han, and more
Potential Business Impact:
Teaches computers to learn better from people.
To design rewards that align with human goals, Reinforcement Learning from Human Feedback (RLHF) has emerged as a prominent technique for learning reward functions from human preferences and optimizing policies via reinforcement learning algorithms. However, existing RLHF methods often misinterpret trajectories as being generated by an optimal policy, causing inaccurate likelihood estimation and suboptimal learning. Inspired by the Direct Preference Optimization (DPO) framework, which learns the optimal policy directly without an explicit reward model, we propose policy-labeled preference learning (PPL), which resolves this likelihood mismatch by modeling human preferences with regret, a quantity that reflects the behavior policy that generated the data. We also derive a contrastive KL regularization from these regret-based principles to strengthen RLHF in sequential decision making. Experiments on high-dimensional continuous control tasks show that PPL significantly improves offline RLHF performance and remains effective in online settings.
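The core idea, scoring preferences by regret under the behavior policy rather than by reward under an assumed-optimal policy, can be illustrated with a minimal Bradley-Terry-style sketch. This is an assumption-laden illustration: the function name, how regrets are obtained, and the loss form are not taken from the paper, and the actual PPL objective and contrastive KL term may differ.

```python
# Minimal sketch (assumption): a Bradley-Terry preference model scored by
# negative regret instead of accumulated reward, so that preference labels
# carry information about the (possibly suboptimal) behavior policy that
# generated each segment. Not the paper's exact PPL loss.
import torch
import torch.nn.functional as F

def regret_preference_loss(regret_a: torch.Tensor,
                           regret_b: torch.Tensor,
                           pref_b_over_a: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry loss in which the lower-regret segment is preferred.

    regret_a, regret_b: estimated regrets of two trajectory segments under
        the behavior policy that produced them (hypothetical inputs).
    pref_b_over_a: 1.0 if the human preferred segment b, else 0.0.
    """
    # Preference logit: negative regret plays the role that the reward sum
    # plays in standard RLHF preference modeling.
    logits = regret_a - regret_b  # larger => b has lower regret => b preferred
    return F.binary_cross_entropy_with_logits(logits, pref_b_over_a)

# Toy usage with made-up regret estimates.
regret_a = torch.tensor([1.5, 0.2])
regret_b = torch.tensor([0.3, 0.9])
labels = torch.tensor([1.0, 0.0])  # human prefers the lower-regret segment
print(regret_preference_loss(regret_a, regret_b, labels))
```

In contrast, a standard reward-based RLHF preference model would replace the regret terms with summed rewards and implicitly treat both segments as if an optimal policy had produced them, which is exactly the likelihood mismatch the abstract describes.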
Similar Papers
Best Policy Learning from Trajectory Preference Feedback
Machine Learning (CS)
Teaches AI to learn better from people's choices.
Towards Efficient Online Exploration for Reinforcement Learning with Human Feedback
Machine Learning (Stat)
Teaches AI to learn what people like faster.
Efficient Reinforcement Learning from Human Feedback via Bayesian Preference Inference
Machine Learning (CS)
Teaches computers faster by asking them what they like.