Policy-labeled Preference Learning: Is Preference Enough for RLHF?

Published: May 6, 2025 | arXiv ID: 2505.06273v2

By: Taehyun Cho, Seokhun Ju, Seungyub Han, and more

Potential Business Impact:

Teaches computers to learn better from people.

Business Areas:
Personalization, Commerce and Shopping

To design rewards that align with human goals, Reinforcement Learning from Human Feedback (RLHF) has emerged as a prominent technique for learning reward functions from human preferences and optimizing policies via reinforcement learning algorithms. However, existing RLHF methods often misinterpret trajectories as being generated by an optimal policy, causing inaccurate likelihood estimation and suboptimal learning. Inspired by the Direct Preference Optimization (DPO) framework, which directly learns an optimal policy without an explicit reward, we propose policy-labeled preference learning (PPL) to resolve likelihood mismatch issues by modeling human preferences with regret, which reflects behavior-policy information. We also provide a contrastive KL regularization, derived from regret-based principles, to enhance RLHF in sequential decision making. Experiments on high-dimensional continuous control tasks demonstrate PPL's significant improvements in offline RLHF performance and its effectiveness in online settings.
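
To make the contrast concrete, the following minimal Python sketch compares a reward-sum preference score (the standard RLHF assumption) with a regret-style score that also depends on how the behavior policy actually performed. The function names and the specific regret formula are illustrative assumptions for exposition, not the paper's definitions.

import numpy as np

def bradley_terry_prob(score_a, score_b):
    # P(segment A preferred over segment B) under a Bradley-Terry model.
    return 1.0 / (1.0 + np.exp(-(score_a - score_b)))

# Standard RLHF scores a segment by its summed reward, which implicitly
# treats the segment as if it were generated by an optimal policy.
def reward_score(rewards):
    return float(np.sum(rewards))

# A regret-style score penalizes the gap between an (estimated) optimal
# value and what the behavior policy actually achieved. This exact form is
# an assumption for illustration, not PPL's formulation.
def regret_score(optimal_values, achieved_returns):
    gap = np.asarray(optimal_values) - np.asarray(achieved_returns)
    return -float(np.sum(gap))

# Two segments with equal summed reward are indistinguishable under the
# reward-based model, but can differ once behavior-policy regret is used.
p_reward = bradley_terry_prob(reward_score([1.0, 1.0]), reward_score([2.0, 0.0]))
p_regret = bradley_terry_prob(regret_score([1.5, 1.5], [1.0, 1.0]),
                              regret_score([3.0, 1.0], [2.0, 0.0]))
print(p_reward, p_regret)  # 0.5 vs. roughly 0.73
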

Country of Origin
🇰🇷 Korea, Republic of

Page Count
29 pages

Category
Computer Science:
Machine Learning (CS)