GB-DQN: Gradient Boosted DQN Models for Non-stationary Reinforcement Learning
By: Chang-Hwan Lee, Chanseung Lee
Non-stationary environments pose a fundamental challenge for deep reinforcement learning, as changes in dynamics or rewards invalidate learned value functions and cause catastrophic forgetting. We propose \emph{Gradient-Boosted Deep Q-Networks (GB-DQN)}, an adaptive ensemble method that addresses model drift through incremental residual learning. Instead of retraining a single Q-network, GB-DQN constructs an additive ensemble in which each new learner is trained to approximate the Bellman residual of the current ensemble after drift. We provide theoretical results showing that each boosting step reduces the empirical Bellman residual and that the ensemble converges to the post-drift optimal value function under standard assumptions. Experiments across a diverse set of control tasks with controlled dynamics changes demonstrate faster recovery, improved stability, and greater robustness compared to DQN and common non-stationary baselines.
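Below is a minimal sketch of the boosting mechanism described in the abstract, written under our own assumptions rather than taken from the authors' implementation: an additive ensemble Q(s,a) = sum_k h_k(s,a) in which, after a drift, earlier learners are frozen and a new learner is fit to the ensemble's Bellman residual. The names (GBDQNEnsemble, make_q_net, add_learner, boosting_step), the network architecture, and the choice to bootstrap from the full ensemble are all assumptions; drift detection, replay buffers, and target networks are omitted for brevity.

```python
# Hypothetical sketch of gradient-boosted Bellman-residual learning (not the authors' code).
import torch
import torch.nn as nn


def make_q_net(obs_dim: int, n_actions: int) -> nn.Module:
    """Small MLP Q-network used as one boosting learner (architecture is an assumption)."""
    return nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))


class GBDQNEnsemble:
    """Additive Q-ensemble: Q(s, a) = sum_k h_k(s, a); the newest learner fits the Bellman residual."""

    def __init__(self, obs_dim: int, n_actions: int, gamma: float = 0.99):
        self.obs_dim, self.n_actions, self.gamma = obs_dim, n_actions, gamma
        self.frozen: list[nn.Module] = []          # learners trained before the latest drift
        self.current = make_q_net(obs_dim, n_actions)

    def _frozen_q(self, obs: torch.Tensor) -> torch.Tensor:
        """Q-values of the frozen part of the ensemble (zero before the first drift)."""
        if not self.frozen:
            return torch.zeros(obs.size(0), self.n_actions)
        return sum(h(obs) for h in self.frozen)

    def q_values(self, obs: torch.Tensor) -> torch.Tensor:
        """Full ensemble Q-values used for acting: frozen learners plus the current one."""
        with torch.no_grad():
            frozen = self._frozen_q(obs)
        return frozen + self.current(obs)

    def add_learner(self) -> None:
        """On detected drift: freeze the existing ensemble and start a fresh residual learner."""
        for p in self.current.parameters():
            p.requires_grad_(False)
        self.frozen.append(self.current.eval())
        self.current = make_q_net(self.obs_dim, self.n_actions)

    def boosting_step(self, batch, optimizer: torch.optim.Optimizer) -> float:
        """One gradient step fitting the current learner to the ensemble's Bellman residual
        (no target network or replay details, for brevity)."""
        obs, actions, rewards, next_obs, dones = batch
        with torch.no_grad():
            q_frozen_sa = self._frozen_q(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
            q_next = self._frozen_q(next_obs) + self.current(next_obs)   # bootstrap from full ensemble
            target = rewards + self.gamma * (1.0 - dones) * q_next.max(dim=1).values
            residual = target - q_frozen_sa                              # Bellman residual w.r.t. frozen part
        pred = self.current(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(pred, residual)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
```

In this sketch, freezing the earlier learners is what preserves the pre-drift value function while the new residual learner adapts to the changed dynamics; when no learner has been frozen yet, the loss above reduces to an ordinary DQN TD error on the single trainable network.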