Pref-GUIDE: Continual Policy Learning from Real-Time Human Feedback via Preference-Based Learning
By: Zhengran Ji, Boyuan Chen
Potential Business Impact:
Teaches robots better by using people's opinions.
Training reinforcement learning agents with human feedback is crucial when task objectives are difficult to specify through dense reward functions. While prior methods rely on offline trajectory comparisons to elicit human preferences, such data is unavailable in online learning scenarios where agents must adapt on the fly. Recent approaches address this by collecting real-time scalar feedback to guide agent behavior and train reward models for continued learning after human feedback becomes unavailable. However, scalar feedback is often noisy and inconsistent, limiting the accuracy and generalization of learned rewards. We propose Pref-GUIDE, a framework that transforms real-time scalar feedback into preference-based data to improve reward model learning for continual policy training. Pref-GUIDE Individual mitigates temporal inconsistency by comparing agent behaviors within short windows and filtering ambiguous feedback. Pref-GUIDE Voting further enhances robustness by aggregating reward models across a population of users to form consensus preferences. Across three challenging environments, Pref-GUIDE significantly outperforms scalar-feedback baselines, with the voting variant exceeding even expert-designed dense rewards. By reframing scalar feedback as structured preferences and aggregating feedback across a population of users, Pref-GUIDE offers a scalable and principled approach for harnessing human input in online reinforcement learning.
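The two mechanisms the abstract describes can be made concrete with a minimal sketch. This is not the authors' implementation: the segment scores, window size, and ambiguity margin below are illustrative assumptions, showing only how windowed scalar feedback might be turned into preference pairs (Pref-GUIDE Individual) and how per-user preferences might be reduced to a consensus by majority vote (Pref-GUIDE Voting).

```python
from collections import Counter

def scalar_to_preferences(segments, window=2, margin=0.1):
    """Sketch of the Pref-GUIDE Individual idea: compare behavior segments
    only within a short temporal window and discard ambiguous comparisons.

    segments: list of (behavior_id, mean_scalar_feedback) in time order.
    Returns a list of (preferred_id, other_id) preference pairs.
    """
    prefs = []
    for i in range(len(segments)):
        # Only compare against segments at most `window` steps ahead.
        for j in range(i + 1, min(i + 1 + window, len(segments))):
            (a, sa), (b, sb) = segments[i], segments[j]
            if abs(sa - sb) < margin:
                continue  # scores too close: treat the feedback as ambiguous
            prefs.append((a, b) if sa > sb else (b, a))
    return prefs

def vote_preferences(per_user_prefs):
    """Sketch of the Pref-GUIDE Voting idea: keep a preference pair only
    when more users vote for that direction than for its reverse."""
    counts = Counter()
    for prefs in per_user_prefs:
        for winner, loser in prefs:
            counts[(winner, loser)] += 1
    return [(w, l) for (w, l), n in counts.items()
            if n > counts.get((l, w), 0)]
```

A consensus preference dataset produced this way could then train a single reward model for continued policy learning once human feedback is no longer available, as in standard preference-based reward learning.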
Similar Papers
Efficient Reinforcement Learning from Human Feedback via Bayesian Preference Inference
Machine Learning (CS)
Teaches computers faster by asking them what they like.
Policy-labeled Preference Learning: Is Preference Enough for RLHF?
Machine Learning (CS)
Teaches computers to learn better from people.
Active Query Selection for Crowd-Based Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn faster from people.