Continuous Policy and Value Iteration for Stochastic Control Problems and Its Convergence

Published: June 9, 2025 | arXiv ID: 2506.08121v1

By: Qi Feng, Gu Wang

Potential Business Impact:

Teaches computers to make the best choices faster.

Business Areas:
Autonomous Vehicles, Transportation

We introduce a continuous policy-value iteration algorithm in which the approximations of the value function of a stochastic control problem and of the optimal control are updated simultaneously through Langevin-type dynamics. The framework applies both to entropy-regularized relaxed control problems and to classical control problems over an infinite horizon. We establish policy improvement and prove convergence to the optimal control under a monotonicity condition on the Hamiltonian. By using Langevin-type stochastic differential equations for continuous updates along the policy-iteration direction, the approach enables distribution sampling and non-convex learning techniques from machine learning to optimize the value function and identify the optimal control at the same time.
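
To make the coupled update concrete, here is a minimal Python sketch of simultaneous Langevin-type value and policy updates on a toy one-dimensional linear-quadratic problem. The problem, the quadratic ansatz V(x) = theta*x^2 + c, the linear feedback a(x) = phi*x, and the hyperparameters eta and noise are all illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D LQ problem (an illustrative stand-in, not the paper's setting):
# dX_t = a_t dt + dW_t, running cost x^2 + a^2, discount rate beta = 1.
beta = 1.0

# Ansatz: value V(x) = theta*x^2 + c, deterministic feedback a(x) = phi*x.
theta, c, phi = 1.0, 0.0, 0.0
eta, noise = 1e-2, 1e-3          # step size and Langevin noise scale (assumed)
xs = rng.normal(size=512)        # states drawn from a fixed exploration law

for _ in range(20000):
    # Bellman residual of V under the current feedback policy:
    # r(x) = x^2 + a^2 + a*V'(x) + 0.5*V''(x) - beta*V(x),
    # with V'(x) = 2*theta*x and V''(x) = 2*theta.
    r = xs**2 + (phi * xs)**2 + 2 * theta * phi * xs**2 + theta \
        - beta * (theta * xs**2 + c)
    # Value update: gradient flow on the mean squared residual + Langevin noise.
    g_theta = np.mean(2 * r * (2 * phi * xs**2 + 1 - beta * xs**2))
    g_c = np.mean(2 * r * (-beta))
    theta += -eta * g_theta + np.sqrt(2 * eta) * noise * rng.normal()
    c += -eta * g_c + np.sqrt(2 * eta) * noise * rng.normal()
    # Policy update: descend the Hamiltonian a^2 + a*V'(x) in phi + Langevin noise.
    g_phi = np.mean((2 * phi + 2 * theta) * xs**2)
    phi += -eta * g_phi + np.sqrt(2 * eta) * noise * rng.normal()

# For beta = 1 the HJB fixed point is theta = (sqrt(5) - 1) / 2 and phi = -theta.
print(f"theta={theta:.3f}, c={c:.3f}, phi={phi:.3f}")
```

With the noise scale set to zero this reduces to a plain coupled gradient flow; the small Langevin term is what lets the parameter dynamics escape bad stationary points, which is the role the paper assigns to its Langevin-type SDEs.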

Country of Origin
🇺🇸 United States

Page Count
37 pages

Category
Mathematics: Optimization and Control