Accelerated Distributional Temporal Difference Learning with Linear Function Approximation
By: Kaicheng Jin, Yang Peng, Jiansheng Yang, and more
Potential Business Impact:
Learns how good choices are, faster and with less data.
In this paper, we study the finite-sample statistical rates of distributional temporal difference (TD) learning with linear function approximation. The purpose of distributional TD learning is to estimate the return distribution of a discounted Markov decision process for a given policy. Previous works on the statistical analysis of distributional TD learning focus mainly on the tabular case. We first consider the linear function approximation setting and conduct a fine-grained analysis of the linear-categorical Bellman equation. Building on this analysis, we further incorporate variance reduction techniques into our new algorithms to establish tight sample complexity bounds independent of the support size $K$ when $K$ is large. Our theoretical results imply that, when employing distributional TD learning with linear function approximation, learning the full distribution of the return function from streaming data is no more difficult than learning its expectation. This work provides new insights into the statistical efficiency of distributional reinforcement learning algorithms.
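To make the setting concrete, here is a minimal Python sketch of one step of categorical distributional TD learning with a linear parametrization of the atom probabilities. The function names, array shapes, and the plain semi-gradient update are illustrative assumptions for exposition, not the paper's exact algorithm, which additionally employs variance reduction.

```python
import numpy as np

def categorical_projection(atoms, target_atoms, target_probs):
    """Project a categorical distribution on shifted atoms back onto the fixed,
    uniformly spaced support `atoms` (standard categorical projection)."""
    proj = np.zeros(len(atoms))
    z_min, z_max = atoms[0], atoms[-1]
    dz = atoms[1] - atoms[0]
    for z, p in zip(target_atoms, target_probs):
        z = np.clip(z, z_min, z_max)
        b = (z - z_min) / dz                      # fractional index on the support
        lo, hi = int(np.floor(b)), int(np.ceil(b))
        if hi == lo:
            proj[lo] += p
        else:
            proj[lo] += p * (hi - b)
            proj[hi] += p * (b - lo)
    return proj

def linear_categorical_td_step(theta, phi_s, phi_s_next, reward, gamma, atoms, lr):
    """One distributional TD update of the linear-categorical parameters theta (d x K):
    probabilities at a state s are modeled as phi(s) @ theta over K fixed atoms."""
    probs_next = phi_s_next @ theta               # predicted categorical weights at s'
    # Distributional Bellman target: shift/scale the support, then project back.
    target = categorical_projection(atoms, reward + gamma * atoms, probs_next)
    pred = phi_s @ theta                          # current prediction at s
    # Semi-gradient step toward the projected target (illustrative update rule).
    theta = theta + lr * np.outer(phi_s, target - pred)
    return theta
```

The key design choice this sketch reflects is that the return distribution is represented on a fixed categorical support and updated toward the projected distributional Bellman target, so the per-step cost and the statistical analysis both involve the support size $K$; the paper's variance-reduced algorithms are designed so that the resulting sample complexity does not grow with $K$ when $K$ is large.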
Similar Papers
Reinforcement Learning From State and Temporal Differences
Machine Learning (CS)
Teaches computers to make better decisions.
A Finite-Time Analysis of TD Learning with Linear Function Approximation without Projections or Strong Convexity
Machine Learning (CS)
Teaches computers to learn without needing extra checks.
First-order Sobolev Reinforcement Learning
Machine Learning (CS)
Teaches computers to learn faster and more reliably.