Deep Q-Learning with Gradient Target Tracking
By: Bum Geun Park, Taeho Lee, Donghwan Lee
Potential Business Impact:
Teaches computers to learn better, faster.
This paper introduces Q-learning with gradient target tracking, a novel reinforcement learning framework that provides a learned, continuous target update mechanism as an alternative to the conventional hard-update paradigm. In the standard deep Q-network (DQN), the target network is a copy of the online network's weights that is held fixed for a number of iterations and then periodically replaced via a hard update. While this stabilizes training by providing consistent targets, it introduces a new challenge: the hard-update period must be carefully tuned to achieve good performance. To address this issue, we propose two gradient-based target update methods: DQN with asymmetric gradient target tracking (AGT2-DQN) and DQN with symmetric gradient target tracking (SGT2-DQN). These methods replace the conventional hard target updates with continuous, structured updates performed by gradient descent, which effectively eliminates the need for manual tuning of the update period. We provide a theoretical analysis proving the convergence of these methods in tabular settings. Empirical evaluations further demonstrate their advantages over standard DQN baselines, suggesting that gradient-based target updates can serve as an effective alternative to conventional target update mechanisms in Q-learning.
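To make the contrast concrete, below is a minimal sketch of the idea in PyTorch: the online network is trained with the usual TD loss, while the target network, instead of being overwritten by a periodic hard copy, is nudged toward the online network by descending a tracking loss of its own. The specific tracking objective shown (a mean squared difference between target and online outputs, updated only on the target side) is an assumption for illustration; the abstract does not give the exact AGT2-DQN or SGT2-DQN objectives, and the network sizes and learning rates here are hypothetical.

```python
# Sketch: gradient-based target tracking vs. a hard target update in DQN.
# Assumptions (not from the abstract): the tracking loss is the MSE between
# target-network and online-network outputs on the current batch, and only the
# target network descends it (an asymmetric variant). Hyperparameters are illustrative.
import torch
import torch.nn as nn

def make_qnet(obs_dim: int, n_actions: int) -> nn.Module:
    return nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

obs_dim, n_actions = 4, 2
online = make_qnet(obs_dim, n_actions)
target = make_qnet(obs_dim, n_actions)
target.load_state_dict(online.state_dict())  # start from the same weights

online_opt = torch.optim.Adam(online.parameters(), lr=1e-3)
target_opt = torch.optim.SGD(target.parameters(), lr=1e-2)  # hypothetical tracking rate

def td_loss(obs, act, rew, next_obs, done, gamma=0.99):
    # Standard DQN TD loss with a bootstrapped target from the target network.
    q = online(obs).gather(1, act.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target(next_obs).max(dim=1).values
        y = rew + gamma * (1.0 - done) * next_q
    return nn.functional.mse_loss(q, y)

def gradient_target_update(obs):
    # Continuous, structured update: the target network moves toward the online
    # network by gradient descent on a tracking loss (assumed form), replacing
    # the periodic hard copy target.load_state_dict(online.state_dict()).
    tracking = nn.functional.mse_loss(target(obs), online(obs).detach())
    target_opt.zero_grad()
    tracking.backward()
    target_opt.step()

# One illustrative training step on random data.
obs = torch.randn(32, obs_dim)
act = torch.randint(n_actions, (32,))
rew = torch.randn(32)
next_obs = torch.randn(32, obs_dim)
done = torch.zeros(32)

loss = td_loss(obs, act, rew, next_obs, done)
online_opt.zero_grad()
loss.backward()
online_opt.step()
gradient_target_update(obs)  # run every step; no hard-update period to tune
```

Because the target update runs every step, there is no update-period hyperparameter to tune; the tracking step size plays the role of a continuous analogue of the hard-update schedule.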
Similar Papers
Deep Reinforcement Learning with Gradient Eligibility Traces
Machine Learning (CS)
Teaches robots to learn tasks faster.
Application of linear regression and quasi-Newton methods to the deep reinforcement learning in continuous action cases
Machine Learning (CS)
Teaches robots to move smoothly and learn.
Hierarchical Policy-Gradient Reinforcement Learning for Multi-Agent Shepherding Control of Non-Cohesive Targets
Machine Learning (CS)
Guides many robots to herd moving things.