Evaluating Feature Dependent Noise in Preference-based Reinforcement Learning
By: Yuxuan Li, Harshith Reddy Kethireddy, Srijita Das
Potential Business Impact:
Teaches robots better, even when humans make mistakes.
Preference-based Reinforcement Learning (PbRL) has gained attention recently, as it is a natural fit for complex tasks where a reward function is not readily available. However, preferences often come with uncertainty and noise when they are not provided by perfect teachers. Much of the prior literature aims to detect noise, but considers only a limited set of noise types, most of which are uniformly distributed and have no connection to the observations. In this work, we formalize the notion of targeted feature-dependent noise and propose several variants, including trajectory feature noise, trajectory similarity noise, uncertainty-aware noise, and Language Model noise. We evaluate feature-dependent noise, where the noise is correlated with certain features, in complex continuous control tasks from DMControl and Meta-world. Our experiments show that under some feature-dependent noise settings, the learning performance of the state-of-the-art noise-robust PbRL method deteriorates significantly, while a PbRL method with no explicit denoising can surprisingly outperform noise-robust PbRL in the majority of settings. We also find that the Language Model's noise exhibits characteristics similar to feature-dependent noise, thereby simulating realistic human teachers, and we call for further study of learning robustly under feature-dependent noise.
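To make the distinction concrete, below is a minimal sketch (not the authors' code) contrasting uniform label noise, which flips preference labels independently of the data, with a feature-dependent variant, where the flip probability is tied to a trajectory feature. The choice of feature (the return gap between the two segments) and the exponential flip schedule are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch: uniform vs. feature-dependent noise on preference labels.
# The feature (per-segment return gap) and flip schedule are assumptions for
# demonstration only; the paper's variants may differ.
import numpy as np

rng = np.random.default_rng(0)

def uniform_noise(labels, flip_prob=0.1):
    """Flip each preference label with the same probability (observation-independent)."""
    flips = rng.random(len(labels)) < flip_prob
    return np.where(flips, 1 - labels, labels)

def feature_dependent_noise(labels, seg_a_returns, seg_b_returns, scale=1.0):
    """Flip labels more often when the two segments are hard to tell apart:
    the flip probability depends on a trajectory feature (here, the return gap)."""
    gap = np.abs(seg_a_returns - seg_b_returns)
    flip_prob = 0.5 * np.exp(-scale * gap)   # similar segments -> near-random labels
    flips = rng.random(len(labels)) < flip_prob
    return np.where(flips, 1 - labels, labels)

# Toy usage: 5 preference queries over segment pairs (label 1 = segment A preferred).
labels = np.array([1, 0, 1, 1, 0])
ret_a = np.array([10.0, 2.0, 5.0, 9.0, 1.0])   # per-segment returns (the "feature")
ret_b = np.array([9.5, 8.0, 5.1, 2.0, 7.0])
print(uniform_noise(labels))
print(feature_dependent_noise(labels, ret_a, ret_b))
```

Under such a scheme, corrupted labels cluster on queries with particular feature values rather than being spread evenly, which is the property the paper argues can break denoising methods tuned to uniform noise.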
Similar Papers
Best Policy Learning from Trajectory Preference Feedback
Machine Learning (CS)
Teaches AI to learn better from people's choices.
Efficient Personalization of Generative Models via Optimal Experimental Design
Machine Learning (CS)
Teaches AI to learn what you like faster.
A Multi-Component Reward Function with Policy Gradient for Automated Feature Selection with Dynamic Regularization and Bias Mitigation
Machine Learning (CS)
Makes AI fair by choosing the right information.