Learning-Based Stable Optimal Control for Infinite-Time Nonlinear Regulation Problems
By: Han Wang, Di Wu, Lin Cheng, et al.
Potential Business Impact:
Lets aerospace vehicles and other nonlinear systems be steered by learned controllers that come with stability guarantees.
Infinite-time nonlinear optimal regulation is widely used in aerospace engineering as a systematic way to synthesize stabilizing controllers. However, conventional methods often rely on linearization assumptions, while recent learning-based approaches rarely provide stability guarantees. This paper proposes a learning-based framework for obtaining a stable optimal controller for nonlinear optimal regulation problems. First, leveraging the equivalence between Pontryagin's Maximum Principle (PMP) and the Hamilton-Jacobi-Bellman (HJB) equation, we improve the backward generation of optimal examples (BGOE) method for infinite-time optimal regulation problems. A state-transition-matrix-guided data generation method is then proposed to efficiently generate a complete dataset covering the desired state space. Finally, we incorporate a Lyapunov stability condition into the learning framework, ensuring stability of the learned optimal policy by jointly learning the optimal value function and the control policy. Simulations on three nonlinear optimal regulation problems show that the learned policy achieves near-optimal regulation control. Code is available at https://github.com/wong-han/PaperNORC
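To make the stability mechanism concrete, here is a minimal sketch, assuming a JAX setup, of how a Lyapunov decrease condition can be folded into the joint value/policy fit the abstract describes. This is illustrative only and not the authors' released code: value_net, policy_net, the toy dynamics, and the penalty weight lam are hypothetical stand-ins, and the (state, value, control) training triples are assumed to come from the BGOE-style backward generation.

```python
# Minimal sketch (not the authors' implementation) of jointly fitting a value
# function and policy with a Lyapunov decrease penalty, in JAX. All names and
# the placeholder dynamics are hypothetical.
import jax
import jax.numpy as jnp

def value_net(params_v, x):
    def phi(z):
        h = jnp.tanh(params_v["W1"] @ z + params_v["b1"])
        return params_v["W2"] @ h
    # ||phi(x) - phi(0)||^2 guarantees V(x) >= 0 and V(0) = 0,
    # a standard Lyapunov-candidate parameterization.
    return jnp.sum((phi(x) - phi(jnp.zeros_like(x))) ** 2)

def policy_net(params_u, x):
    h = jnp.tanh(params_u["W1"] @ x + params_u["b1"])
    return params_u["W2"] @ h

def dynamics(x, u):
    # Placeholder for the problem-specific nonlinear dynamics x_dot = f(x, u).
    return -x + jnp.tanh(u)

def loss(params, batch, lam=1.0):
    params_v, params_u = params

    def per_sample(x, v_star, u_star):
        v = value_net(params_v, x)
        u = policy_net(params_u, x)
        # Supervised fit to the PMP/BGOE-generated optimal value and control.
        fit = (v - v_star) ** 2 + jnp.sum((u - u_star) ** 2)
        # Lyapunov condition: Vdot = grad_x V(x) . f(x, u) must be negative;
        # only violations (Vdot >= 0) are penalized.
        vdot = jax.grad(value_net, argnums=1)(params_v, x) @ dynamics(x, u)
        return fit + lam * jnp.maximum(vdot, 0.0)

    xs, vs, us = batch
    return jnp.mean(jax.vmap(per_sample)(xs, vs, us))
```

Penalizing only violations of the decrease condition leaves the regression objective untouched wherever V̇ < 0 already holds, so the stability term acts as a constraint rather than a competing fit target.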
Similar Papers
Optimal Output Feedback Learning Control for Discrete-Time Linear Quadratic Regulation
Systems and Control
Learns output-feedback controllers for linear systems from data.
Data-Driven Nonlinear Regulation: Gaussian Process Learning
Systems and Control
Uses Gaussian processes to learn nonlinear regulation from data.
Nonlinear Robust Optimization for Planning and Control
Systems and Control
Plans and controls robot motion that stays safe under disturbances.