Adaptive Replay Buffer for Offline-to-Online Reinforcement Learning
By: Chihyeon Song, Jaewoo Lee, Jinkyoo Park
Potential Business Impact:
Helps robots learn faster by smartly balancing past practice data with new experience.
Offline-to-Online Reinforcement Learning (O2O RL) faces a critical dilemma in balancing the use of a fixed offline dataset with newly collected online experiences. Standard methods, often relying on a fixed data-mixing ratio, struggle to manage the trade-off between early learning stability and asymptotic performance. To overcome this, we introduce the Adaptive Replay Buffer (ARB), a novel approach that dynamically prioritizes data sampling based on a lightweight metric we call 'on-policyness'. Unlike prior methods that rely on complex learning procedures or fixed ratios, ARB is designed to be learning-free and simple to implement, seamlessly integrating into existing O2O RL algorithms. It assesses how closely each collected trajectory aligns with the current policy's behavior and assigns a proportional sampling weight to every transition within that trajectory. This strategy effectively leverages offline data for initial stability while progressively focusing learning on the most relevant, high-reward online experiences. Our extensive experiments on D4RL benchmarks demonstrate that ARB consistently mitigates early performance degradation and significantly improves the final performance of various O2O RL algorithms, highlighting the importance of an adaptive, behavior-aware replay buffer design.
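The abstract does not specify the exact on-policyness metric or weighting rule, so the sketch below is only illustrative rather than the authors' implementation. It assumes on-policyness is approximated by the exponentiated mean log-probability the current policy assigns to a trajectory's actions, and that each transition inherits its trajectory's weight as its (unnormalized) sampling probability. Names such as AdaptiveReplayBuffer, add_trajectory, and policy_log_prob_fn are hypothetical.

```python
# Minimal sketch of a trajectory-weighted replay buffer in the spirit of ARB.
# Assumption (not from the paper): "on-policyness" is proxied by the current
# policy's mean action log-probability over a trajectory; the paper's exact
# metric and weighting scheme may differ.

import numpy as np


class AdaptiveReplayBuffer:
    def __init__(self, capacity=1_000_000):
        self.transitions = []   # (obs, action, reward, next_obs, done) tuples
        self.traj_ids = []      # trajectory id of each stored transition
        self.traj_weights = {}  # trajectory id -> on-policyness weight
        self.capacity = capacity

    def add_trajectory(self, trajectory, on_policyness):
        """Store a trajectory; every transition shares its trajectory's weight.

        Offline trajectories can be inserted with a default weight (e.g. 1.0)
        so they dominate early sampling before online data becomes on-policy.
        """
        traj_id = len(self.traj_weights)
        self.traj_weights[traj_id] = max(float(on_policyness), 1e-6)
        for transition in trajectory:
            if len(self.transitions) >= self.capacity:
                # Evict the oldest transition when the buffer is full.
                self.transitions.pop(0)
                self.traj_ids.pop(0)
            self.transitions.append(transition)
            self.traj_ids.append(traj_id)

    def sample(self, batch_size, rng=np.random):
        """Sample transitions with probability proportional to trajectory weight."""
        weights = np.array([self.traj_weights[t] for t in self.traj_ids])
        probs = weights / weights.sum()
        idx = rng.choice(len(self.transitions), size=batch_size, p=probs)
        return [self.transitions[i] for i in idx]


def estimate_on_policyness(policy_log_prob_fn, trajectory):
    """One possible learning-free proxy: exp of the mean action log-probability
    under the current policy (higher means the trajectory looks more on-policy)."""
    log_probs = [policy_log_prob_fn(obs, act) for obs, act, *_ in trajectory]
    return float(np.exp(np.mean(log_probs)))
```

In use, a fresh online rollout would be scored with estimate_on_policyness under the current policy and added via add_trajectory, so that sampling gradually shifts from the offline dataset toward the most policy-relevant online experience.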
Similar Papers
Behavior-Adaptive Q-Learning: A Unifying Framework for Offline-to-Online RL
Machine Learning (CS)
Helps robots learn safely from past mistakes.
Adversarial Policy Optimization for Offline Preference-based Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn from examples.
Taming OOD Actions for Offline Reinforcement Learning: An Advantage-Based Approach
Machine Learning (CS)
Helps robots learn better from past mistakes.