Sample Efficient Experience Replay in Non-stationary Environments
By: Tianyang Duan, Zongyuan Zhang, Songxiao Guo, and more
Potential Business Impact:
Teaches robots to learn faster when things change.
Reinforcement learning (RL) in non-stationary environments is challenging, as changing dynamics and rewards quickly make past experiences outdated. Traditional experience replay (ER) methods, especially those using TD-error prioritization, struggle to distinguish between changes caused by the agent's policy and those caused by the environment, resulting in inefficient learning under dynamic conditions. To address this challenge, we propose the Discrepancy of Environment Dynamics (DoE), a metric that isolates the effect of environment shifts on value functions. Building on this, we introduce Discrepancy of Environment Prioritized Experience Replay (DEER), an adaptive ER framework that prioritizes transitions based on both policy updates and environmental changes. DEER uses a binary classifier to detect environment changes and applies distinct prioritization strategies before and after each shift, enabling more sample-efficient learning. Experiments on four non-stationary benchmarks show that DEER improves the performance of off-policy algorithms by 11.54 percent over the best-performing state-of-the-art ER methods.
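The abstract does not give the DoE formula or the classifier details, but the overall mechanism can be sketched. Below is a minimal, hypothetical Python sketch of a DEER-style buffer: priorities blend a policy-driven signal (TD error) with an environment-change score, a simple detector flags shifts, and transitions stored before the most recent detected shift are down-weighted. All names here (DeerStyleReplayBuffer, env_discrepancy, shift_discount, detect_shift) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


class DeerStyleReplayBuffer:
    """Hypothetical sketch of a DEER-style prioritized replay buffer.

    Priorities mix |TD error| (policy-driven change) with a surrogate
    environment-discrepancy score (environment-driven change). Transitions
    stored before the last detected environment shift are sampled less often.
    """

    def __init__(self, capacity, alpha=0.6, shift_discount=0.5):
        self.capacity = capacity
        self.alpha = alpha                    # priority exponent, as in standard PER
        self.shift_discount = shift_discount  # down-weighting applied to pre-shift samples
        self.transitions = []                 # (state, action, reward, next_state, done)
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pre_shift = np.zeros(capacity, dtype=bool)
        self.pos = 0
        self.size = 0

    def add(self, transition, td_error, env_discrepancy):
        # Blend the policy-driven and environment-driven signals into one priority.
        priority = (abs(td_error) + env_discrepancy + 1e-6) ** self.alpha
        if self.size < self.capacity:
            self.transitions.append(transition)
            self.size += 1
        else:
            self.transitions[self.pos] = transition
        self.priorities[self.pos] = priority
        self.pre_shift[self.pos] = False
        self.pos = (self.pos + 1) % self.capacity

    def mark_environment_shift(self):
        # Called when the shift detector fires: everything currently stored
        # predates the new dynamics, so it receives the pre-shift discount.
        self.pre_shift[:self.size] = True

    def sample(self, batch_size):
        # Distinct treatment of pre- and post-shift transitions at sampling time.
        p = self.priorities[:self.size].copy()
        p[self.pre_shift[:self.size]] *= self.shift_discount
        p /= p.sum()
        idx = np.random.choice(self.size, size=batch_size, p=p)
        return idx, [self.transitions[i] for i in idx]


def detect_shift(recent_errors, older_errors, ratio_threshold=2.0):
    """Toy binary shift detector: flags a shift when recent prediction errors
    grow sharply relative to a trailing baseline (a stand-in for the paper's
    learned classifier)."""
    recent = np.mean(np.abs(recent_errors)) + 1e-8
    baseline = np.mean(np.abs(older_errors)) + 1e-8
    return recent / baseline > ratio_threshold
```

The key design choice this sketch illustrates is the separation of the two change sources: the env_discrepancy term and the pre/post-shift discount approximate what the paper attributes to DoE and its before/after prioritization, whereas standard TD-error PER would conflate both into a single signal.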
Similar Papers
Efficient Adaptation of Reinforcement Learning Agents to Sudden Environmental Change
Machine Learning (CS)
Helps robots learn new tricks without forgetting old ones.
Improvements of Dark Experience Replay and Reservoir Sampling towards Better Balance between Consolidation and Plasticity
Machine Learning (CS)
Helps robots learn new things without forgetting old ones.
DQN Performance with Epsilon Greedy Policies and Prioritized Experience Replay
Machine Learning (CS)
Teaches computers to learn faster and better.