Fusing Rewards and Preferences in Reinforcement Learning
By: Sadegh Khorasani, Saber Salehkaleybar, Negar Kiyavash, and more
Potential Business Impact:
Teaches robots more effectively by learning from people's choices.
We present Dual-Feedback Actor (DFA), a reinforcement learning algorithm that fuses individual rewards and pairwise preferences (when available) into a single update rule. DFA uses the policy's log-probabilities directly to model the preference probability, avoiding a separate reward-modeling step. Preferences can be provided by human annotators (at the state level or the trajectory level) or synthesized online from Q-values stored in an off-policy replay buffer. Under a Bradley-Terry model, we prove that minimizing DFA's preference loss recovers the entropy-regularized Soft Actor-Critic (SAC) policy. Our simulation results show that DFA trained on generated preferences matches or exceeds SAC on six control environments and exhibits more stable training. With only a semi-synthetic preference dataset under the Bradley-Terry model, our algorithm outperforms reward-modeling reinforcement learning from human feedback (RLHF) baselines in a stochastic GridWorld and approaches the performance of an oracle with true rewards.
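To make the abstract's key idea concrete, the preference part of the objective can be read as a Bradley-Terry likelihood whose scores are the policy's own log-probabilities, so no separate reward model is trained. Below is a minimal PyTorch-style sketch under that reading; the function name dfa_preference_loss, the scaling coefficient beta, and the assumption that the policy returns a torch.distributions object are illustrative choices of ours, and the full DFA update additionally includes the standard reward-driven actor term described above.

```python
import torch
import torch.nn.functional as F

def dfa_preference_loss(policy, states, actions_pref, actions_rej, beta=1.0):
    """Sketch of a preference loss where the policy's log-probabilities act as
    Bradley-Terry scores (no separate reward model).

    Assumptions (not from the paper's code): policy(states) returns a
    torch.distributions.Distribution over actions, and beta is a hypothetical
    temperature/scaling coefficient.
    """
    dist = policy(states)
    logp_pref = dist.log_prob(actions_pref).sum(-1)  # log pi(a_preferred | s)
    logp_rej = dist.log_prob(actions_rej).sum(-1)    # log pi(a_rejected | s)

    # Bradley-Terry model: P(a_preferred wins) = sigmoid(beta * (logp_pref - logp_rej))
    logits = beta * (logp_pref - logp_rej)

    # Maximize the log-likelihood of the observed preference labels (all "preferred wins")
    return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
```

Minimizing this loss pushes up the log-probability of preferred actions relative to rejected ones; combined with an entropy-regularized reward term, this is consistent with the paper's claim that the optimum coincides with the SAC policy.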
Similar Papers
RLAF: Reinforcement Learning from Automaton Feedback
Machine Learning (CS)
Teaches computers to learn tasks with tricky rules.
Direct Preference Optimization with Unobserved Preference Heterogeneity: The Necessity of Ternary Preferences
Artificial Intelligence
Teaches AI to understand many different opinions.
Pref-GUIDE: Continual Policy Learning from Real-Time Human Feedback via Preference-Based Learning
Machine Learning (CS)
Teaches robots more effectively by using people's opinions.