Deep Q-Learning with Gradient Target Tracking

Published: March 20, 2025 | arXiv ID: 2503.16700v3

By: Bum Geun Park, Taeho Lee, Donghwan Lee

Potential Business Impact:

Removes a fragile, hand-tuned setting from a widely used AI training method, letting systems learn more reliably with less engineering effort.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

This paper introduces Q-learning with gradient target tracking, a novel reinforcement learning framework that provides a learned, continuous target update mechanism as an alternative to the conventional hard update paradigm. In the standard deep Q-network (DQN), the target network is a copy of the online network's weights, held fixed for a number of iterations before being periodically replaced via a hard update. While this stabilizes training by providing consistent targets, it introduces a new challenge: the hard update period must be carefully tuned to achieve optimal performance. To address this issue, we propose two gradient-based target update methods: DQN with asymmetric gradient target tracking (AGT2-DQN) and DQN with symmetric gradient target tracking (SGT2-DQN). These methods replace conventional hard target updates with continuous, structured updates driven by gradient descent, effectively eliminating the need to tune the update period by hand. We provide a theoretical analysis proving the convergence of these methods in tabular settings. Empirical evaluations further demonstrate their advantages over standard DQN baselines, suggesting that gradient-based target updates can serve as an effective alternative to conventional target update mechanisms in Q-learning.
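
The abstract does not spell out the tracking objective, so the following is a minimal sketch of the idea under an assumed quadratic loss: the target parameters θ' descend on L(θ') = ½‖θ' − θ‖², giving the continuous update θ' ← θ' − β(θ' − θ) in place of DQN's periodic hard copy θ' ← θ. The class name, the step size β, and the asymmetric/symmetric split shown here are illustrative assumptions, not the paper's exact AGT2/SGT2 formulation.

```python
import copy
import torch
import torch.nn as nn

class GradientTargetTracker:
    """Sketch of gradient-based target tracking (assumed quadratic loss).

    Assumed objective: L(theta') = 0.5 * ||theta' - theta||^2.
    One gradient step with rate beta yields
        theta' <- theta' - beta * (theta' - theta),
    replacing DQN's periodic hard copy theta' <- theta.
    """

    def __init__(self, online: nn.Module, beta: float = 0.01):
        self.online = online
        self.target = copy.deepcopy(online)   # target starts as a copy
        for p in self.target.parameters():
            p.requires_grad_(False)           # target is not trained by TD loss
        self.beta = beta                      # tracking step size (assumed hyperparameter)

    @torch.no_grad()
    def step(self):
        # Asymmetric variant: only the target moves toward the online network.
        for p_t, p_o in zip(self.target.parameters(), self.online.parameters()):
            # Gradient of 0.5 * ||p_t - p_o||^2 with respect to p_t is (p_t - p_o).
            p_t -= self.beta * (p_t - p_o)

# Usage: call tracker.step() after each optimizer step on the online network,
# instead of copying weights every fixed number of iterations.
online_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
tracker = GradientTargetTracker(online_net, beta=0.01)
tracker.step()
```

Under this assumed quadratic objective, one gradient step coincides in form with Polyak (soft) averaging, θ' ← (1 − β)θ' + βθ; a symmetric variant would additionally take a gradient step on the online parameters θ, coupling the two networks. The paper's actual losses, step sizes, and convergence conditions are given in the full text.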

Country of Origin
🇰🇷 Korea, Republic of

Page Count
55 pages

Category
Computer Science:
Machine Learning (CS)