Policy Mirror Descent with Temporal Difference Learning: Sample Complexity under Online Markov Data
By: Wenye Li, Hongxu Chen, Jiacai Liu, and more
This paper studies the policy mirror descent (PMD) method, a general policy optimization framework in reinforcement learning that covers a wide range of policy gradient methods by specifying different mirror maps. Existing sample complexity analyses of PMD either assume a generative sampling model, or assume a Markovian sampling model but require the action values to be explicitly approximated to a pre-specified accuracy. In contrast, we analyze the sample complexity of policy mirror descent with temporal difference (TD) learning under the Markovian sampling model. Two algorithms, Expected TD-PMD and Approximate TD-PMD, are presented; they are off-policy and mixed-policy algorithms, respectively. With a sufficiently small constant policy update step size, a $\tilde{O}(\varepsilon^{-2})$ sample complexity (where $\tilde{O}(\cdot)$ hides a logarithmic factor in $\varepsilon$) is established for both algorithms to achieve average-time $\varepsilon$-optimality. With adaptive policy update step sizes, the sample complexity is further improved to $O(\varepsilon^{-2})$ (without the hidden logarithmic factor) for achieving last-iterate $\varepsilon$-optimality.
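To make the abstract's setup concrete, here is a minimal sketch of the general idea: alternate a TD-style critic, trained on a single Markovian trajectory, with a mirror-descent policy update. This is not the paper's Expected TD-PMD or Approximate TD-PMD algorithm; the tabular MDP, the SARSA-style TD(0) critic, the negative-entropy mirror map (which yields a multiplicative-weights update), and all parameter values are illustrative assumptions.

```python
import numpy as np

# Hypothetical tabular MDP used only for illustration.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] = next-state distribution
R = rng.uniform(size=(n_states, n_actions))                        # deterministic rewards

def td_q_evaluation(policy, n_steps=5000, alpha=0.1, s=0):
    """SARSA-style TD(0) estimate of Q^pi from one Markovian trajectory (assumed critic)."""
    Q = np.zeros((n_states, n_actions))
    a = rng.choice(n_actions, p=policy[s])
    for _ in range(n_steps):
        s_next = rng.choice(n_states, p=P[s, a])
        a_next = rng.choice(n_actions, p=policy[s_next])
        td_target = R[s, a] + gamma * Q[s_next, a_next]
        Q[s, a] += alpha * (td_target - Q[s, a])
        s, a = s_next, a_next
    return Q

def pmd_step(policy, Q, eta=0.1):
    """PMD update with the negative-entropy mirror map: pi_{k+1}(a|s) ∝ pi_k(a|s) exp(eta * Q(s, a))."""
    logits = np.log(policy + 1e-12) + eta * Q
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    new_policy = np.exp(logits)
    return new_policy / new_policy.sum(axis=1, keepdims=True)

policy = np.full((n_states, n_actions), 1.0 / n_actions)
for k in range(50):
    Q_hat = td_q_evaluation(policy)    # inexact action values from online Markov data
    policy = pmd_step(policy, Q_hat)   # mirror-descent policy improvement
```

The sample complexity results summarized above concern how many trajectory samples such TD-based critics need in total so that the resulting PMD iterates are $\varepsilon$-optimal, under constant versus adaptive step sizes.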
Similar Papers
On the Convergence of Policy Mirror Descent with Temporal Difference Evaluation
Optimization and Control
Teaches computers to learn better from experience.
One-Step Flow Policy Mirror Descent
Machine Learning (CS)
Makes robots learn and act much faster.
On the Effect of Regularization in Policy Mirror Descent
Machine Learning (CS)
Makes computer learning more stable and reliable.