Lyapunov Stability Learning with Nonlinear Control via Inductive Biases
By: Yupu Lu, Shijie Lin, Hao Xu, and more
Potential Business Impact:
Makes robots safer by learning to avoid mistakes.
Finding a control Lyapunov function (CLF) for a dynamical system with a controller is an effective way to guarantee stability, a crucial issue in safety-critical applications. Recently, deep learning models representing CLFs have been applied within a learner-verifier framework to identify satisfiable candidates. However, the learner treats the Lyapunov conditions as complex optimisation constraints, which makes global convergence hard to achieve, and these conditions are also complicated to implement on the verification side. To improve this framework, we treat the Lyapunov conditions as inductive biases and design a neural CLF and a CLF-based controller guided by this knowledge. This design yields a stable optimisation process with fewer constraints and allows end-to-end learning of both the CLF and the controller. Across extensive experimental cases, our approach achieves a higher convergence rate and a larger region of attraction (ROA) when learning the CLF than existing methods. We also analyse in detail why the success rate of previous methods decreases during learning.
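To illustrate the inductive-bias idea (this is a generic sketch, not the authors' architecture): instead of penalising violations of the Lyapunov conditions in the loss, one can build a candidate that satisfies V(0) = 0 and V(x) > 0 for x ≠ 0 by construction. The network `phi` and the margin `eps` below are hypothetical choices for illustration.

```python
import numpy as np

# Hedged sketch: a neural Lyapunov candidate whose positive definiteness
# is guaranteed by its architecture, so only the decrease condition along
# trajectories would remain to be enforced during training.

rng = np.random.default_rng(0)

# Hypothetical one-hidden-layer network phi: R^2 -> R^4 (random weights
# stand in for trained ones).
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)

def phi(x):
    return np.tanh(W1 @ x + b1)

def V(x, eps=1e-3):
    # V(x) = ||phi(x) - phi(0)||^2 + eps * ||x||^2
    # => V(0) = 0 exactly, and V(x) >= eps * ||x||^2 > 0 for x != 0,
    # regardless of the network weights.
    d = phi(x) - phi(np.zeros(2))
    return float(d @ d + eps * (x @ x))

print(V(np.zeros(2)))                 # 0.0 by construction
print(V(np.array([1.0, -2.0])) > 0)   # True: positive away from the origin
```

Because these two Lyapunov conditions hold for every weight setting, the learner never has to trade them off against the remaining objective, which is one way an inductive-bias design can stabilise optimisation.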
Similar Papers
Stability Enhancement in Reinforcement Learning via Adaptive Control Lyapunov Function
Machine Learning (CS)
Makes robots learn safely without breaking things.
Chasing Stability: Humanoid Running via Control Lyapunov Function Guided Reinforcement Learning
Robotics
Robot learns to run and stay balanced.
Neural Incremental Input-to-State Stable Control Lyapunov Functions for Unknown Continuous-time Systems
Systems and Control
Makes machines learn to control themselves safely.