Solving nonconvex Hamilton--Jacobi--Isaacs equations with PINN-based policy iteration
By: Hee Jun Yang, Minjung Gim, Yeoneung Kim
Potential Business Impact:
Helps robots plan paths around moving obstacles.
We propose a mesh-free policy iteration framework that combines classical dynamic programming with physics-informed neural networks (PINNs) to solve high-dimensional, nonconvex Hamilton--Jacobi--Isaacs (HJI) equations arising in stochastic differential games and robust control. The method alternates between solving linear second-order PDEs under fixed feedback policies and updating the controls via pointwise minimax optimization using automatic differentiation. Under standard Lipschitz and uniform ellipticity assumptions, we prove that the value function iterates converge locally uniformly to the unique viscosity solution of the HJI equation. The analysis establishes equi-Lipschitz regularity of the iterates, enabling provable stability and convergence without requiring convexity of the Hamiltonian. Numerical experiments demonstrate the accuracy and scalability of the method. In a two-dimensional stochastic path-planning game with a moving obstacle, our method matches finite-difference benchmarks with relative $L^2$-errors below $10^{-2}$. In five- and ten-dimensional publisher-subscriber differential games with anisotropic noise, the proposed approach consistently outperforms direct PINN solvers, yielding smoother value functions and lower residuals. Our results suggest that integrating PINNs with policy iteration is a practical and theoretically grounded method for solving high-dimensional, nonconvex HJI equations, with potential applications in robotics, finance, and multi-agent reinforcement learning.
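To make the alternation described in the abstract concrete, the sketch below is a minimal PyTorch illustration (our own, not the authors' released code) of one plausible realization: policy evaluation trains a PINN on the residual of the linear PDE under frozen feedback policies, and policy improvement performs pointwise gradient descent-ascent on the Hamiltonian using automatic differentiation. The toy dynamics $f(x,a,b)=a-b$, the quadratic costs, the constant noise matrix, the network width, and all optimization hyperparameters are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (assumptions, not the authors' code) of PINN-based policy iteration
# for a terminal-value HJI equation of the form
#   -V_t - (1/2) tr(sigma sigma^T D^2 V) - min_a max_b [ f(x,a,b).DV + l(x,a,b) ] = 0,
#   V(T, x) = g(x),
# shown on a toy 2D problem; dynamics, costs, noise, and hyperparameters are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
DIM, T_FINAL = 2, 1.0
SIGMA = 0.2 * torch.eye(DIM)                      # assumed constant diffusion matrix

def f(x, a, b):                                   # assumed dynamics: control vs. disturbance
    return a - b

def running_cost(x, a, b):                        # assumed running cost l(x, a, b)
    sq = lambda z: (z ** 2).sum(-1, keepdim=True)
    return sq(x) + sq(a) - sq(b)

def terminal_cost(x):                             # assumed terminal cost g(x)
    return (x ** 2).sum(-1, keepdim=True)

class ValueNet(nn.Module):
    """Small MLP surrogate V_theta(t, x)."""
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1 + DIM, width), nn.Tanh(),
                                 nn.Linear(width, width), nn.Tanh(),
                                 nn.Linear(width, 1))
    def forward(self, t, x):
        return self.net(torch.cat([t, x], dim=-1))

def derivatives(V, t, x):
    """V, V_t, DV, and tr(sigma sigma^T D^2 V) via automatic differentiation."""
    v = V(t, x)
    v_t = torch.autograd.grad(v.sum(), t, create_graph=True)[0]
    dv = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    a_mat, trace = SIGMA @ SIGMA.T, 0.0
    for i in range(DIM):                          # row-by-row second derivatives
        d2 = torch.autograd.grad(dv[:, i].sum(), x, create_graph=True)[0]
        trace = trace + (a_mat[i] * d2).sum(-1, keepdim=True)
    return v, v_t, dv, trace

def improve_policy(x, p, steps=50, lr=0.2):
    """Pointwise minimax: gradient descent on a, ascent on b, with p = DV frozen."""
    a, b = torch.zeros_like(p), torch.zeros_like(p)
    for _ in range(steps):
        a = a.detach().requires_grad_(True)
        b = b.detach().requires_grad_(True)
        h = ((f(x, a, b) * p).sum(-1, keepdim=True) + running_cost(x, a, b)).sum()
        ga, gb = torch.autograd.grad(h, [a, b])
        a, b = a - lr * ga, b + lr * gb           # min over control, max over disturbance
    return a.detach(), b.detach()

V = ValueNet()
for k in range(5):                                # outer policy-iteration loop
    # mesh-free space-time collocation points and terminal points
    t_c = (T_FINAL * torch.rand(1024, 1)).requires_grad_(True)
    x_c = (4.0 * torch.rand(1024, DIM) - 2.0).requires_grad_(True)
    x_T = 4.0 * torch.rand(256, DIM) - 2.0
    t_T = torch.full((256, 1), T_FINAL)

    # policy improvement: pointwise minimax using DV of the current value iterate
    _, _, dv, _ = derivatives(V, t_c, x_c)
    a_k, b_k = improve_policy(x_c.detach(), dv.detach())

    # policy evaluation: PINN solve of the *linear* PDE under the frozen policies
    opt = torch.optim.Adam(V.parameters(), lr=1e-3)
    for _ in range(2000):
        opt.zero_grad()
        _, v_t, dv, tr = derivatives(V, t_c, x_c)
        res = v_t + 0.5 * tr + (f(x_c, a_k, b_k) * dv).sum(-1, keepdim=True) \
              + running_cost(x_c, a_k, b_k)
        loss = (res ** 2).mean() + ((V(t_T, x_T) - terminal_cost(x_T)) ** 2).mean()
        loss.backward()
        opt.step()
    print(f"policy iteration {k}: PDE residual loss {loss.item():.3e}")
```

In this sketch the value network is warm-started across outer iterations, so each policy-evaluation step re-solves the linear PDE for the current frozen policies, mirroring the alternation the abstract describes; the pointwise descent-ascent update is one simple stand-in for the paper's minimax policy-improvement step.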
Similar Papers
Physics-informed approach for exploratory Hamilton--Jacobi--Bellman equations via policy iterations
Numerical Analysis
Teaches robots to learn tasks faster and smarter.
Neural Policy Iteration for Stochastic Optimal Control: A Physics-Informed Approach
Machine Learning (CS)
Helps robots learn tasks faster and more reliably.
On the Convergence of the Policy Iteration for Infinite-Horizon Nonlinear Optimal Control Problems
Optimization and Control
Makes robots learn better and faster.