Active Query Selection for Crowd-Based Reinforcement Learning
By: Jonathan Erskine, Taku Yamagata, Raúl Santos-Rodríguez
Potential Business Impact:
Teaches robots to learn faster from people.
Preference-based reinforcement learning has gained prominence as a strategy for training agents in environments where the reward signal is difficult to specify or misaligned with human intent. However, its effectiveness is often limited by the high cost and low availability of reliable human input, especially in domains where expert feedback is scarce or errors are costly. To address this, we propose a novel framework that combines two complementary strategies: probabilistic crowd modelling to handle noisy, multi-annotator feedback, and active learning to prioritise feedback on the most informative agent actions. We extend the Advise algorithm to support multiple trainers, estimate their reliability online, and incorporate entropy-based query selection to guide feedback requests. We evaluate our approach in a set of environments spanning both synthetic and real-world-inspired settings, including 2D games (Taxi, Pacman, Frozen Lake) and a blood glucose control task for Type 1 Diabetes using the clinically approved UVA/Padova simulator. Our preliminary results demonstrate that agents trained with feedback on uncertain trajectories learn faster in most tasks, and that our approach outperforms the baselines on the blood glucose control task.
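The abstract's three ingredients, an Advise-style feedback model extended to several trainers, online estimation of each trainer's reliability, and entropy-based query selection, fit together roughly as in the sketch below. This is a minimal illustration, not the paper's code: the function names, the discrete action space, the entropy threshold, and the agreement-based reliability update are assumptions, and only the per-action feedback rule C^Δ / (C^Δ + (1 − C)^Δ) follows the published Advise algorithm (Griffith et al., 2013).

```python
import numpy as np

# Illustrative sketch only: names, the entropy threshold, and the
# reliability update are assumptions, not the authors' implementation.


def policy_entropy(action_probs):
    """Shannon entropy of the agent's action distribution in a state."""
    p = np.clip(action_probs, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))


def should_query(action_probs, threshold=0.8):
    """Entropy-based active query rule: request trainer feedback only in
    states where the agent's policy is uncertain."""
    return policy_entropy(action_probs) > threshold


def feedback_distribution(deltas, reliabilities):
    """Advise feedback model extended to several trainers.

    deltas[j, a] is trainer j's count of 'right' minus 'wrong' labels for
    action a; reliabilities[j] is the current estimate C_j in (0.5, 1) of
    trainer j's consistency. Per-trainer likelihoods are multiplied,
    treating trainers as independent (an assumption)."""
    probs = np.ones(deltas.shape[1])
    for delta_j, c_j in zip(deltas, reliabilities):
        # Advise: Pr(a optimal | trainer j) = C^d / (C^d + (1 - C)^d).
        num = c_j ** delta_j
        probs *= num / (num + (1.0 - c_j) ** delta_j)
    return probs / probs.sum()


def shaped_policy(agent_probs, deltas, reliabilities):
    """Policy shaping: multiply the agent's own action distribution by
    the crowd feedback distribution and renormalise."""
    p = agent_probs * feedback_distribution(deltas, reliabilities)
    return p / p.sum()


def update_reliability(c_j, agreed, lr=0.05):
    """Toy online reliability update: nudge C_j toward the observed rate
    of agreement between trainer j and the crowd consensus."""
    c_j += lr * ((1.0 if agreed else 0.0) - c_j)
    return float(np.clip(c_j, 0.51, 0.99))  # keep C_j in (0.5, 1)
```

In use, the agent would call should_query at each visited state, collect labels from the available trainers when it fires, fold them into deltas, and act according to shaped_policy, while each C_j is refreshed as that trainer's labels are compared against the aggregated consensus. Querying only on high-entropy states is what concentrates the limited human feedback budget on the most informative actions.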
Similar Papers
Efficient Reinforcement Learning from Human Feedback via Bayesian Preference Inference
Machine Learning (CS)
Teaches computers faster by asking them what they like.
Offline Clustering of Preference Learning with Active-data Augmentation
Machine Learning (CS)
Learns what people like from limited past choices.