Active Query Selection for Crowd-Based Reinforcement Learning

Published: August 26, 2025 | arXiv ID: 2508.19132v1

By: Jonathan Erskine, Taku Yamagata, Raúl Santos-Rodríguez

Potential Business Impact:

Helps AI agents and robots learn faster from human feedback.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Preference-based reinforcement learning has gained prominence as a strategy for training agents in environments where the reward signal is difficult to specify or misaligned with human intent. However, its effectiveness is often limited by the high cost and low availability of reliable human input, especially in domains where expert feedback is scarce or errors are costly. To address this, we propose a novel framework that combines two complementary strategies: probabilistic crowd modelling to handle noisy, multi-annotator feedback, and active learning to prioritize feedback on the most informative agent actions. We extend the Advise algorithm to support multiple trainers, estimate their reliability online, and incorporate entropy-based query selection to guide feedback requests. We evaluate our approach in a set of environments spanning both synthetic and real-world-inspired settings, including 2D games (Taxi, Pacman, Frozen Lake) and a blood glucose control task for Type 1 Diabetes using the clinically approved UVA/Padova simulator. Our preliminary results demonstrate that agents trained with feedback on uncertain trajectories exhibit faster learning in most tasks, and that our approach outperforms the baselines on the blood glucose control task.
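The abstract's two main ingredients, entropy-based query selection and Advise-style aggregation of feedback from multiple trainers with online reliability estimates, can be illustrated with a minimal sketch. The function names, the per-trainer reliability parameterisation, and the ±1 feedback encoding below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np


def action_entropy(action_probs):
    """Shannon entropy of the agent's action distribution in a state."""
    p = np.clip(action_probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p))


def should_query(action_probs, threshold):
    """Entropy-based query selection: request crowd feedback only when
    the agent is uncertain about which action to take (high entropy)."""
    return action_entropy(action_probs) >= threshold


def crowd_feedback_posterior(feedback_counts, reliabilities):
    """Advise-style policy shaping generalised to multiple trainers.

    feedback_counts[i, a] : net feedback (+1 approve / -1 disapprove)
                            from trainer i for action a in this state.
    reliabilities[i]      : estimated probability that trainer i's
                            feedback is correct (learned online).
    Returns a distribution over actions implied by the crowd feedback.
    """
    n_trainers, n_actions = feedback_counts.shape
    log_post = np.zeros(n_actions)
    for i in range(n_trainers):
        c = np.clip(reliabilities[i], 1e-3, 1 - 1e-3)
        # Each unit of net positive feedback scales the action's
        # likelihood by c / (1 - c), as in the original Advise update.
        log_post += feedback_counts[i] * np.log(c / (1.0 - c))
    post = np.exp(log_post - log_post.max())
    return post / post.sum()
```

In a full loop, `crowd_feedback_posterior` would typically be combined with the agent's own policy (for example, by multiplying the two distributions and renormalising), while `should_query` keeps the number of human queries low by only soliciting feedback on high-uncertainty states. How the paper estimates reliabilities online and schedules queries may differ from this sketch.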

Country of Origin
🇬🇧 United Kingdom

Page Count
15 pages

Category
Computer Science:
Machine Learning (CS)