Stochastic Control Methods for Optimization
By: Jinniao Qiu
Potential Business Impact:
Finds the best answers to hard math problems using smart random guessing.
In this work, we investigate a stochastic control framework for global optimization over both finite-dimensional Euclidean spaces and the Wasserstein space of probability measures. In the Euclidean setting, the original minimization problem is approximated by a family of regularized stochastic control problems; using dynamic programming, we analyze the associated Hamilton--Jacobi--Bellman equations and obtain tractable representations via the Cole--Hopf transform and the Feynman--Kac formula. For optimization over probability measures, we formulate a regularized mean-field control problem characterized by a master equation, and further approximate it by controlled $N$-particle systems. We establish that, as the regularization parameter tends to zero (and as the particle number tends to infinity for the optimization over probability measures), the value of the control problem converges to the global minimum of the original objective. Building on the resulting probabilistic representations, Monte Carlo-based numerical schemes are proposed and numerical experiments are reported to illustrate the practical performance of the methods and to support the theoretical convergence rates.
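The Euclidean-space representation via the Cole--Hopf transform and the Feynman--Kac formula can be illustrated with a small Monte Carlo sketch. The idea is the Laplace principle: for a regularization parameter $\varepsilon > 0$, the quantity $-\varepsilon \log \mathbb{E}[\exp(-f(X_T)/\varepsilon)]$, with $X_T$ a diffusion started at $x_0$, converges to the (local transport-penalized) minimum of $f$ as $\varepsilon \to 0$. The objective `f`, the parameter values, and the pure-Brownian (uncontrolled) dynamics below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def f(x):
    # Hypothetical multimodal test objective: a double well
    # with global minima f = 0 at x = +1 and x = -1.
    return (x ** 2 - 1.0) ** 2

def regularized_value(f, x0, eps=0.01, T=25.0, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the Cole-Hopf / Feynman-Kac representation

        v_eps(x0) = -eps * log E[ exp( -f(x0 + sqrt(2*eps) * W_T) / eps ) ],

    where W is a standard Brownian motion, so W_T ~ N(0, T).  Since
    exp(-f/eps) <= exp(-min f / eps), the estimate is always >= min f,
    and it approaches the global minimum (up to a quadratic transport
    penalty of order |x* - x0|^2 / (4T)) as eps -> 0 with T large.
    """
    rng = np.random.default_rng(seed)
    # Terminal points of the uncontrolled diffusion dX = sqrt(2*eps) dW.
    x_T = x0 + np.sqrt(2.0 * eps * T) * rng.standard_normal(n_samples)
    # Log-sum-exp trick for numerical stability of the exponential average.
    a = -f(x_T) / eps
    m = a.max()
    return -eps * (m + np.log(np.exp(a - m).mean()))
```

Starting from `x0 = 0.0` (where `f(0) = 1`), the estimate drops close to the global minimum value 0, reflecting the convergence of the regularized value to the global minimum as $\varepsilon \to 0$; the paper's controlled dynamics and mean-field extension refine this basic mechanism.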
Similar Papers
Mean-Field Generalisation Bounds for Learning Controls in Stochastic Environments
Optimization and Control
Teaches computers to make smart choices from data.
Stochastic Optimal Control via Measure Relaxations
Machine Learning (CS)
Makes smart decisions faster for tricky problems.
Towards optimal control of ensembles of discrete-time systems
Optimization and Control
Helps control many things at once.