Universal Approximation Theorem of Deep Q-Networks

Published: May 4, 2025 | arXiv ID: 2505.02288v1

By: Qian Qi

Potential Business Impact:

Provides theoretical guarantees that deep Q-networks can learn near-optimal decisions for continuous-time systems, which is relevant to controlling physical processes and acting on high-frequency data streams.

Business Areas:
Quantum Computing Science and Engineering

We establish a continuous-time framework for analyzing Deep Q-Networks (DQNs) via stochastic control and Forward-Backward Stochastic Differential Equations (FBSDEs). For a continuous-time Markov Decision Process (MDP) driven by a square-integrable martingale, we study the approximation properties of DQNs: leveraging residual-network approximation theorems and large deviation bounds for the state-action process, we show that DQNs can approximate the optimal Q-function on compact sets to arbitrary accuracy with high probability. We then analyze the convergence of a general Q-learning algorithm for training DQNs in this setting by adapting stochastic approximation theorems. Our analysis emphasizes the interplay between DQN layer count and time discretization, and the role of viscosity solutions (primarily for the value function $V^*$) in addressing potential non-smoothness of the optimal Q-function. This work bridges deep reinforcement learning and stochastic control, offering insights into DQNs in continuous-time settings relevant to applications involving physical systems or high-frequency data.
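
The abstract's main ingredients (a residual Q-network, a time discretization of the continuous-time dynamics, and a Q-learning target with discount factor $e^{-\rho\,\Delta t}$) can be illustrated with a minimal sketch. This is not the paper's construction: the network sizes, dynamics, reward, action grid, and all numerical parameters below are illustrative assumptions chosen only to show how depth, step size, and the Bellman target interact.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes and parameters (assumptions, not taken from the paper)
    state_dim, action_dim, hidden, n_layers = 4, 2, 64, 6
    dt, rho = 0.01, 0.1                      # time step and discount rate
    gamma = np.exp(-rho * dt)                # discretized discount factor

    # Residual MLP: Q(s, a) = w_out . h_L, with h_{k+1} = h_k + f_k(h_k);
    # depth n_layers plays the role of the "layer count" in the approximation result.
    W_in = rng.normal(scale=0.5, size=(hidden, state_dim + action_dim))
    W_res = rng.normal(scale=0.1, size=(n_layers, hidden, hidden))
    w_out = rng.normal(scale=0.5, size=hidden)

    def q_net(s, a):
        h = np.tanh(W_in @ np.concatenate([s, a]))
        for k in range(n_layers):            # residual blocks
            h = h + np.tanh(W_res[k] @ h)
        return w_out @ h

    # Euler step of an illustrative controlled diffusion: dX = b(X, a) dt + sigma dW
    def step(s, a):
        drift = -s + np.pad(a, (0, state_dim - action_dim))
        return s + drift * dt + 0.2 * np.sqrt(dt) * rng.normal(size=state_dim)

    # One Q-learning target: running cost over dt plus e^{-rho dt} * max_a' Q(s', a'),
    # with the maximum taken over a coarse action grid (an assumption for this sketch).
    actions = [np.array(a) for a in [(-1., -1.), (-1., 1.), (1., -1.), (1., 1.)]]
    s, a = rng.normal(size=state_dim), actions[0]
    s_next = step(s, a)
    reward = -float(s @ s) * dt
    target = reward + gamma * max(q_net(s_next, ap) for ap in actions)
    td_error = target - q_net(s, a)
    print(f"TD error at one transition: {td_error:.4f}")

In a training loop one would update the network parameters to shrink this TD error over many sampled transitions; the paper's convergence analysis concerns such Q-learning iterations, and its approximation result concerns how well networks of this residual form can represent the optimal Q-function as the depth grows and the time step shrinks.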

Country of Origin
🇨🇳 China

Page Count
22 pages

Category
Computer Science:
Machine Learning (CS)