Continuous Policy and Value Iteration for Stochastic Control Problems and Its Convergence
By: Qi Feng, Gu Wang
Potential Business Impact:
Teaches computers to make the best choices, faster.
We introduce a continuous policy-value iteration algorithm in which the approximations of the value function of a stochastic control problem and the optimal control are updated simultaneously through Langevin-type dynamics. The framework applies to both entropy-regularized relaxed control problems and classical control problems over an infinite horizon. We establish policy improvement and prove convergence to the optimal control under a monotonicity condition on the Hamiltonian. By using Langevin-type stochastic differential equations for continuous updates along the policy-iteration direction, our approach enables the use of distribution sampling and non-convex learning techniques from machine learning to optimize the value function and identify the optimal control simultaneously.
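To give a rough feel for the idea, the sketch below is a minimal, hedged illustration (not the paper's algorithm or its assumptions): a one-dimensional, infinite-horizon, entropy-regularized linear-quadratic problem where a quadratic value ansatz and a Gaussian relaxed policy are updated simultaneously by noisy (Langevin-type) gradient steps, the value parameters driving the HJB residual toward zero and the policy parameters moving along the policy-improvement direction of the Hamiltonian. All symbols, constants, and step sizes here are illustrative assumptions.

```python
import numpy as np

# Hedged sketch (not the paper's exact scheme):
#   dX_t = a_t dt + sigma dW_t,
#   minimize E int_0^inf e^{-beta t} ( q X_t^2 + r a_t^2 - gamma * entropy(pi_t) ) dt.
# Value ansatz V(x) = k x^2 + c; relaxed policy a ~ N(m x, s^2).
# All parameter values below are illustrative assumptions, not taken from the paper.

rng = np.random.default_rng(0)
q, r, beta, gamma, sigma = 1.0, 1.0, 0.5, 0.1, 0.5

def hamiltonian(x, vx, m, s):
    """Expected drift term plus running cost of the Gaussian relaxed control."""
    mean_a, var_a = m * x, s**2
    entropy = 0.5 * np.log(2 * np.pi * np.e * var_a)
    return mean_a * vx + q * x**2 + r * (mean_a**2 + var_a) - gamma * entropy

def hjb_residual(x, k, c, m, s):
    """Residual of the discounted HJB equation at state x for the current parameters."""
    v, vx, vxx = k * x**2 + c, 2.0 * k * x, 2.0 * k
    return hamiltonian(x, vx, m, s) + 0.5 * sigma**2 * vxx - beta * v

def num_grad(f, p, eps=1e-5):
    """Central-difference gradient of a scalar function f at parameter vector p."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p); e[i] = eps
        g[i] = (f(p + e) - f(p - e)) / (2 * eps)
    return g

k, c = 1.0, 0.0          # value-function parameters
m, log_s = 0.0, 0.0      # policy parameters (feedback gain, log of std)
lr, temp = 1e-3, 1e-5    # step size and Langevin "temperature"

for it in range(50000):
    x = rng.normal(scale=1.0)   # state at which the residual is evaluated

    # Value update: noisy gradient descent on the squared HJB residual.
    vp = np.array([k, c])
    g_v = num_grad(lambda p: 0.5 * hjb_residual(x, p[0], p[1], m, np.exp(log_s))**2, vp)
    vp = vp - lr * g_v + np.sqrt(2 * lr * temp) * rng.normal(size=2)
    k, c = vp

    # Policy update: noisy gradient step decreasing the Hamiltonian
    # (the policy-improvement direction), given the current value gradient.
    pp = np.array([m, log_s])
    g_p = num_grad(lambda p: hamiltonian(x, 2.0 * k * x, p[0], np.exp(p[1])), pp)
    pp = pp - lr * g_p + np.sqrt(2 * lr * temp) * rng.normal(size=2)
    m, log_s = pp

print(f"k ~ {k:.3f}, c ~ {c:.3f}, gain m ~ {m:.3f}, policy std ~ {np.exp(log_s):.3f}")
```

In this toy setting the feedback gain m is pulled toward -k/r and the policy standard deviation toward sqrt(gamma/(2r)), mirroring how simultaneous value and policy updates can co-evolve toward an entropy-regularized optimum; the small injected Gaussian noise is what makes the updates Langevin-type rather than plain gradient flow.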
Similar Papers
Deep Learning for Continuous-time Stochastic Control with Jumps
Machine Learning (CS)
Teaches computers to make smart choices in risky situations.
Accuracy of Discretely Sampled Stochastic Policies in Continuous-time Reinforcement Learning
Machine Learning (CS)
Makes robots learn better by trying random actions.