Physics-informed approach for exploratory Hamilton--Jacobi--Bellman equations via policy iterations

Published: August 3, 2025 | arXiv ID: 2508.01720v1

By: Yeongjong Kim, Namkyeong Cho, Minseok Kim, and more

Potential Business Impact:

Could help robots and other controlled systems learn complex tasks more efficiently, without requiring grid-based solvers that break down in high dimensions.

We propose a mesh-free policy iteration framework based on physics-informed neural networks (PINNs) for solving entropy-regularized stochastic control problems. The method iteratively alternates between soft policy evaluation and improvement using automatic differentiation and neural approximation, without relying on spatial discretization. We present a detailed $L^2$ error analysis that decomposes the total approximation error into three sources: iteration error, policy network error, and PDE residual error. The proposed algorithm is validated with a range of challenging control tasks, including high-dimensional linear-quadratic regulation in 5D and 10D, as well as nonlinear systems such as pendulum and cartpole problems. Numerical results confirm the scalability, accuracy, and robustness of our approach across both linear and nonlinear benchmarks.
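The alternation between soft policy evaluation and improvement described in the abstract can be illustrated on the simplest case the paper benchmarks: an entropy-regularized linear-quadratic problem. The sketch below is a hand-rolled 1D illustration with a quadratic value ansatz in place of the paper's neural network, so the PDE in each evaluation step reduces to matching coefficients; all parameter values (`a`, `b`, `q`, `r`, `rho`, `tau`, `sigma`) are illustrative choices, not taken from the paper's experiments.

```python
import math

# Entropy-regularized (exploratory) 1D LQR solved by soft policy iteration.
# Dynamics: dx = (a x + b u) dt + sigma dW; discounted cost
#   E ∫ e^{-rho t} ( q x^2 + r u^2 + tau * log pi(u|x) ) dt,
# where tau * log(pi) is the entropy regularizer. All constants are
# illustrative assumptions, not the paper's settings.
a, b, sigma = 0.0, 1.0, 0.1
q, r, rho, tau = 1.0, 1.0, 1.0, 0.1

# Quadratic value ansatz V(x) = p x^2 + c; the improved (relaxed) policy is
# Gaussian with mean k * x and variance s = tau / (2 r).
k, s = 0.0, tau / (2.0 * r)
p = c = 0.0
for _ in range(50):
    # Soft policy evaluation: plug the Gaussian policy into the linear HJB
    #   rho V = q x^2 + r E[u^2] + (a x + b E[u]) V' + (sigma^2/2) V'' + tau E[log pi]
    # and match x^2 and constant coefficients (no mesh needed in this ansatz).
    p = (q + r * k * k) / (rho - 2.0 * (a + b * k))
    c = (r * s + sigma**2 * p
         - 0.5 * tau * math.log(2.0 * math.pi * math.e * s)) / rho
    # Soft policy improvement: pi(u|x) ∝ exp(-(r u^2 + b u V'(x)) / tau),
    # whose mean gives the new linear feedback gain.
    k = -b * p / r

# At the fixed point, p satisfies the scalar Riccati equation
#   rho p = q + 2 a p - b^2 p^2 / r   (here: p = (sqrt(5) - 1) / 2 ≈ 0.618).
print(p)
```

In the paper's method the quadratic ansatz is replaced by a PINN, the coefficient matching becomes minimizing the PDE residual via automatic differentiation at sampled collocation points, and the policy is a second network, which is what lets the scheme scale to the 5D/10D and nonlinear (pendulum, cartpole) benchmarks.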

Country of Origin
🇰🇷 Korea, Republic of

Page Count
9 pages

Category
Mathematics: Numerical Analysis