Score: 1

Sample Efficient Experience Replay in Non-stationary Environments

Published: September 18, 2025 | arXiv ID: 2509.15032v1

By: Tianyang Duan, Zongyuan Zhang, Songxiao Guo, and more

Potential Business Impact:

Helps robots and other learning systems adapt and learn faster when their environment changes.

Business Areas:
A/B Testing, Data and Analytics

Reinforcement learning (RL) in non-stationary environments is challenging, as changing dynamics and rewards quickly make past experiences outdated. Traditional experience replay (ER) methods, especially those using TD-error prioritization, struggle to distinguish between changes caused by the agent's policy and those from the environment, resulting in inefficient learning under dynamic conditions. To address this challenge, we propose the Discrepancy of Environment Dynamics (DoE), a metric that isolates the effects of environment shifts on value functions. Building on this, we introduce Discrepancy of Environment Prioritized Experience Replay (DEER), an adaptive ER framework that prioritizes transitions based on both policy updates and environmental changes. DEER uses a binary classifier to detect environment changes and applies distinct prioritization strategies before and after each shift, enabling more sample-efficient learning. Experiments on four non-stationary benchmarks demonstrate that DEER further improves the performance of off-policy algorithms by 11.54 percent compared to the best-performing state-of-the-art ER methods.
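The paper itself provides no code, but the core idea of DEER, prioritizing replayed transitions differently before and after a detected environment shift, can be sketched as below. This is a minimal illustration under stated assumptions: the class name DEERStyleReplayBuffer, the threshold-based detect_shift check, and the fixed post_shift_boost factor are stand-ins, not the authors' implementation; DEER uses a learned binary classifier and the DoE metric to set priorities.

```python
import numpy as np


class DEERStyleReplayBuffer:
    """Illustrative prioritized replay buffer that switches its
    prioritization rule when an environment shift is detected.
    Sketch only: the threshold-based shift detector and the
    post-shift boost stand in for the paper's classifier and DoE metric."""

    def __init__(self, capacity=100_000, alpha=0.6, post_shift_boost=2.0):
        self.capacity = capacity
        self.alpha = alpha                      # priority exponent, as in standard PER
        self.post_shift_boost = post_shift_boost
        self.transitions = []                   # (s, a, r, s_next, done) tuples
        self.priorities = np.zeros(capacity)
        self.pos = 0
        self.shift_detected = False

    def detect_shift(self, dynamics_error, threshold=1.0):
        """Placeholder binary check: flag an environment change when the
        agent's dynamics-model prediction error exceeds a threshold."""
        self.shift_detected = dynamics_error > threshold
        return self.shift_detected

    def add(self, transition, td_error):
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if self.shift_detected:
            # After a detected shift, up-weight fresh transitions so the agent
            # re-learns the new dynamics faster (illustrative assumption).
            priority *= self.post_shift_boost
        if len(self.transitions) < self.capacity:
            self.transitions.append(transition)
        else:
            self.transitions[self.pos] = transition
        self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, rng=None):
        rng = rng or np.random.default_rng()
        n = len(self.transitions)
        probs = self.priorities[:n] / self.priorities[:n].sum()
        idx = rng.choice(n, size=batch_size, p=probs)
        return [self.transitions[i] for i in idx], idx
```

In the actual framework, the shift detector is a trained binary classifier and the pre- and post-shift prioritization is driven by the DoE signal rather than a fixed boost factor; the sketch only conveys the control flow of switching strategies at a detected change.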

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)