Multi-Task Reward Learning from Human Ratings
By: Mingkang Wu, Devin White, Evelyn Rose, and more
Potential Business Impact:
Teaches computers to learn like people.
Reinforcement learning from human feedback (RLHF) has become a key technique for aligning model behavior with users' goals. However, while humans integrate multiple strategies when making decisions, current RLHF approaches often simplify this process by modeling human reasoning as a single isolated task, such as classification or regression. In this paper, we propose a novel reinforcement learning (RL) method that mimics human decision-making by jointly considering multiple tasks. Specifically, we leverage human ratings in reward-free environments to infer a reward function, introducing learnable weights that balance the contributions of both classification and regression models. This design captures the inherent uncertainty in human decision-making and allows the model to adaptively emphasize different strategies. We conduct several experiments using synthetic human ratings to validate the effectiveness of the proposed approach. Results show that our method consistently outperforms existing rating-based RL methods and, in some cases, even surpasses traditional RL approaches.
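The core modeling idea described in the abstract, inferring a single reward signal from human ratings by blending a rating-classification head with a reward-regression head through learnable weights, can be illustrated with a minimal sketch. The sketch below assumes a PyTorch implementation with a shared state-action encoder, two learnable task logits passed through a softmax, and an expected-rating mapping that puts the classification output on the same scalar scale as the regression output; the class name, architecture sizes, and combination rule are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn

class MultiTaskRewardModel(nn.Module):
    # Hypothetical reward model: combines a rating-classification head and a
    # reward-regression head through learnable mixture weights.
    # Illustrative sketch only, not the authors' implementation.
    def __init__(self, obs_dim, act_dim, num_rating_classes, hidden=128):
        super().__init__()
        # Shared encoder over concatenated (state, action) pairs.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.cls_head = nn.Linear(hidden, num_rating_classes)  # discrete rating class
        self.reg_head = nn.Linear(hidden, 1)                   # scalar reward
        # Learnable logits that balance the two tasks' contributions.
        self.task_logits = nn.Parameter(torch.zeros(2))

    def forward(self, obs, act):
        z = self.encoder(torch.cat([obs, act], dim=-1))
        class_logits = self.cls_head(z)              # (batch, num_rating_classes)
        reg_reward = self.reg_head(z).squeeze(-1)    # (batch,)
        # Expected rating under the class distribution (assumed convention:
        # higher class index = better), giving a scalar comparable to regression.
        probs = torch.softmax(class_logits, dim=-1)
        levels = torch.arange(probs.shape[-1], device=probs.device, dtype=probs.dtype)
        cls_reward = (probs * levels).sum(dim=-1)
        # Learnable weights decide how much each strategy contributes to the reward.
        w = torch.softmax(self.task_logits, dim=0)
        reward = w[0] * cls_reward + w[1] * reg_reward
        return reward, class_logits, reg_reward, w

Under the same assumptions, training would pair a cross-entropy loss on the predicted rating class with a mean-squared-error loss on the regressed reward, weighted by the same learnable mixture, so the model can adaptively emphasize whichever strategy better explains the observed ratings.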
Similar Papers
Reinforcement Learning from Human Feedback
Machine Learning (CS)
Teaches computers to follow human instructions better.
Robust Reinforcement Learning from Human Feedback for Large Language Models Fine-Tuning
Machine Learning (Stat)
Makes AI understand what people want better.
Contextual Online Uncertainty-Aware Preference Learning for Human Feedback
Machine Learning (Stat)
Teaches AI to learn what people like.