Score: 1

Learning-Based Stable Optimal Control for Infinite-Time Nonlinear Regulation Problems

Published: June 12, 2025 | arXiv ID: 2506.10291v1

By: Han Wang, Di Wu, Lin Cheng, and more

Potential Business Impact:

Enables learned flight and regulation controllers that come with stability guarantees, reducing the risk of unstable behavior or crashes in aerospace systems.

Business Areas:
Embedded Systems Hardware, Science and Engineering, Software

Infinite-time nonlinear optimal regulation control is widely used in aerospace engineering as a systematic method for synthesizing stable controllers. However, conventional methods often rely on linearization assumptions, while recent learning-based approaches rarely provide stability guarantees. This paper proposes a learning-based framework for learning a stable optimal controller for nonlinear optimal regulation problems. First, leveraging the equivalence between the Pontryagin Maximum Principle (PMP) and the Hamilton-Jacobi-Bellman (HJB) equation, we improve the backward generation of optimal examples (BGOE) method for infinite-time optimal regulation problems. A state-transition-matrix-guided data generation method is then proposed to efficiently generate a complete dataset that covers the desired state space. Finally, we incorporate the Lyapunov stability condition into the learning framework, ensuring stability of the learned optimal policy by jointly learning the optimal value function and control policy. Simulations on three nonlinear optimal regulation problems show that the learned policy achieves near-optimal regulation control. Code is provided at https://github.com/wong-han/PaperNORC
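The abstract's key idea is to fit a value function and policy to optimal examples while penalizing violations of the Lyapunov decrease condition. Below is a minimal PyTorch sketch of that idea only; the network sizes, loss weights, toy pendulum dynamics, placeholder labels, and all names are illustrative assumptions, not the paper's actual implementation or data pipeline.

```python
# Sketch: jointly fit V(x) and u(x) to (placeholder) optimal examples while
# penalizing violations of the Lyapunov condition V_dot = dV/dx . f(x, u) < 0.
# All hyperparameters, dynamics, and labels here are illustrative assumptions.
import torch
import torch.nn as nn

class ValueNet(nn.Module):
    """Value function with V(0) = 0 and V(x) >= 0 enforced by construction."""
    def __init__(self, state_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))
    def forward(self, x):
        return (self.net(x) - self.net(torch.zeros_like(x))).pow(2).sum(-1)

class PolicyNet(nn.Module):
    """Control policy u(x)."""
    def __init__(self, state_dim=2, control_dim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, control_dim))
    def forward(self, x):
        return self.net(x)

def pendulum_dynamics(x, u):
    """Toy nonlinear dynamics x_dot = f(x, u): a damped pendulum (illustrative)."""
    theta, omega = x[..., 0], x[..., 1]
    return torch.stack([omega, -torch.sin(theta) - 0.1 * omega + u[..., 0]], dim=-1)

V, pi = ValueNet(), PolicyNet()
opt = torch.optim.Adam(list(V.parameters()) + list(pi.parameters()), lr=1e-3)

# Placeholder "optimal examples" standing in for data the paper generates
# backward from the PMP conditions (BGOE); these labels are NOT real solutions.
x_data = torch.randn(256, 2)
v_star = x_data.pow(2).sum(-1)
u_star = -x_data[:, 1:2]

for step in range(2000):
    x = x_data.clone().requires_grad_(True)
    v, u = V(x), pi(x)

    # Supervised fit to the optimal examples.
    loss_fit = (v - v_star).pow(2).mean() + (u - u_star).pow(2).mean()

    # Lyapunov decrease penalty: push V_dot below a small negative margin.
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    v_dot = (grad_v * pendulum_dynamics(x, u)).sum(-1)
    loss_lyap = torch.relu(v_dot + 1e-3 * x.pow(2).sum(-1)).mean()

    loss = loss_fit + 1.0 * loss_lyap  # penalty weight is an arbitrary choice
    opt.zero_grad(); loss.backward(); opt.step()
```

The Lyapunov term uses automatic differentiation of the learned value function along the closed-loop dynamics, so the policy and value function are coupled through a single loss rather than trained separately.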

Country of Origin
🇨🇳 China

Repos / Data Links
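https://github.com/wong-han/PaperNORC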

Page Count
24 pages

Category
Electrical Engineering and Systems Science:
Systems and Control