Advancing Frontiers of Path Integral Theory for Stochastic Optimal Control
By: Apurva Patil
Potential Business Impact:
Lets robots learn and act in tricky situations.
Stochastic Optimal Control (SOC) problems arise in systems influenced by uncertainty, such as autonomous robots or financial markets. Traditional methods like dynamic programming are often intractable for high-dimensional, nonlinear systems due to the curse of dimensionality. This dissertation explores the path integral control framework as a scalable, sampling-based alternative. By reformulating SOC problems as expectations over stochastic trajectories, it enables efficient policy synthesis via Monte Carlo sampling and supports real-time implementation through GPU parallelization. We apply this framework to six classes of SOC problems: Chance-Constrained SOC, Stochastic Differential Games, Deceptive Control, Task Hierarchical Control, Risk Mitigation of Stealthy Attacks, and Discrete-Time LQR. A sample complexity analysis for the discrete-time case is also provided. These contributions establish a foundation for simulator-driven autonomy in complex, uncertain environments.
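To illustrate the sampling idea in the abstract, here is a minimal sketch of a path-integral (MPPI-style) control update: sample many noisy control perturbations, roll out the dynamics, and average the perturbations with exponential (softmax) weights on trajectory cost. The 1D double-integrator dynamics, cost terms, and all hyperparameters below are illustrative assumptions, not the dissertation's exact formulation.

```python
import numpy as np

def path_integral_control(x0, u_nom, K=256, lam=1.0, sigma=0.5, dt=0.05, seed=0):
    """One path-integral update of a nominal control sequence.

    Samples K perturbed rollouts of an assumed 1D double integrator and
    returns the cost-weighted control update (an MPPI-style estimator).
    """
    rng = np.random.default_rng(seed)
    T = len(u_nom)
    eps = rng.normal(0.0, sigma, size=(K, T))   # control-noise samples
    costs = np.zeros(K)
    for k in range(K):
        pos, vel = x0
        for t in range(T):
            u = u_nom[t] + eps[k, t]
            vel += u * dt                        # double-integrator step
            pos += vel * dt
            costs[k] += pos**2 + 0.1 * vel**2 + 0.01 * u**2
    # Softmax weights over trajectories; subtracting the min cost
    # avoids numerical underflow in the exponential.
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return u_nom + w @ eps                       # weighted perturbation

u = path_integral_control(x0=(1.0, 0.0), u_nom=np.zeros(20))
```

In practice the K rollouts are independent, which is what makes the method amenable to the GPU parallelization mentioned above; this sketch uses a plain Python loop for clarity.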
Similar Papers
Approximate constrained stochastic optimal control via parameterized input inference
Systems and Control
Helps robots learn to move safely around obstacles.
Error Propagation in Dynamic Programming: From Stochastic Control to Option Pricing
Machine Learning (Stat)
Helps computers learn to make smart money choices.
Unifying Entropy Regularization in Optimal Control: From and Back to Classical Objectives via Iterated Soft Policies and Path Integral Solutions
Optimization and Control
Makes robots learn faster and smarter.