Accuracy of Discretely Sampled Stochastic Policies in Continuous-time Reinforcement Learning
By: Yanwei Jia, Du Ouyang, Yufei Zhang
Potential Business Impact:
Makes robots learn better by trying random actions.
Stochastic policies are widely used in continuous-time reinforcement learning algorithms. However, executing a stochastic policy and evaluating its performance in a continuous-time environment remain open challenges. This work introduces and rigorously analyzes a policy execution framework that samples actions from a stochastic policy at discrete time points and implements them as piecewise constant controls. We prove that as the sampling mesh size tends to zero, the controlled state process converges weakly to the dynamics with coefficients aggregated according to the stochastic policy. We explicitly quantify the convergence rate based on the regularity of the coefficients and establish an optimal first-order convergence rate for sufficiently regular coefficients. Additionally, we show that the same convergence rates hold with high probability concerning the sampling noise, and further establish a $1/2$-order almost sure convergence when the volatility is not controlled. Building on these results, we analyze the bias and variance of various policy evaluation and policy gradient estimators based on discrete-time observations. Our results provide theoretical justification for the exploratory stochastic control framework in [H. Wang, T. Zariphopoulou, and X.Y. Zhou, J. Mach. Learn. Res., 21 (2020), pp. 1-34].
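To make the policy execution framework concrete, below is a minimal sketch of the core idea described in the abstract: actions are sampled from a stochastic policy at discrete grid points and held fixed as piecewise constant controls while the state evolves under a controlled SDE. This is an illustrative assumption-laden example, not the paper's implementation; the one-dimensional dynamics, the Gaussian policy, the helper names (`simulate_sampled_policy`, `drift`, `vol`, `sample_action`), and the Euler-Maruyama sub-stepping are all choices made here for illustration.

```python
import numpy as np

def simulate_sampled_policy(x0, T, h, drift, vol, sample_action, n_substeps=10, rng=None):
    """Execute a stochastic policy on a time grid of mesh size h.

    At each grid point t_k an action is sampled from the policy and held
    constant (piecewise constant control) on [t_k, t_k + h); between grid
    points the controlled SDE dX_t = b(X_t, a) dt + sigma(X_t, a) dW_t is
    simulated with Euler-Maruyama (a simulation choice made here, not the
    paper's construction).
    """
    rng = rng or np.random.default_rng()
    n_steps = int(round(T / h))
    dt = h / n_substeps
    x = x0
    path, actions = [x0], []
    for k in range(n_steps):
        a = sample_action(x, k * h, rng)      # action frozen over the subinterval
        actions.append(a)
        for _ in range(n_substeps):           # Euler-Maruyama within the subinterval
            dw = rng.normal(0.0, np.sqrt(dt))
            x = x + drift(x, a) * dt + vol(x, a) * dw
        path.append(x)
    return np.array(path), np.array(actions)


# Hypothetical dynamics with a Gaussian (exploratory) policy, in the spirit of
# Wang, Zariphopoulou, and Zhou (2020); the coefficients below are made up.
drift = lambda x, a: a - 0.5 * x
vol = lambda x, a: 0.2 + 0.1 * abs(a)
sample_action = lambda x, t, rng: rng.normal(-0.8 * x, 0.3)  # mean -0.8*x, exploration std 0.3

for h in [0.1, 0.01, 0.001]:                  # refining the sampling mesh
    path, _ = simulate_sampled_policy(x0=1.0, T=1.0, h=h, drift=drift, vol=vol,
                                      sample_action=sample_action)
    print(f"h={h:>6}: X_T = {path[-1]:+.4f}")
```

Refining `h` in the loop mirrors the paper's asymptotic regime of the sampling mesh size tending to zero, under which the controlled state process is shown to converge weakly to the policy-aggregated dynamics.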
Similar Papers
Continuous Policy and Value Iteration for Stochastic Control Problems and Its Convergence
Optimization and Control
Teaches computers to make best choices faster.
Agile Temporal Discretization for Symbolic Optimal Control
Systems and Control
Makes robots learn faster with flexible timing.
Reinforcement Learning with Random Time Horizons
Machine Learning (CS)
Teaches computers to learn from tasks that can end anytime.