Lyapunov Stability Learning with Nonlinear Control via Inductive Biases

Published: November 3, 2025 | arXiv ID: 2511.01283v1

By: Yupu Lu, Shijie Lin, Hao Xu, and more

Potential Business Impact:

Makes robots and other safety-critical control systems safer by learning controllers with verified stability guarantees.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Finding a control Lyapunov function (CLF) for a dynamical system with a controller is an effective way to guarantee stability, a crucial requirement in safety-critical applications. Recently, deep learning models representing CLFs have been applied within a learner-verifier framework to identify satisfiable candidates. However, the learner treats the Lyapunov conditions as complex optimisation constraints, which makes global convergence hard to achieve, and these conditions are also complicated to implement for verification. To improve this framework, we treat the Lyapunov conditions as inductive biases and design a neural CLF and a CLF-based controller guided by this knowledge. This design enables a stable optimisation process with few constraints and allows end-to-end learning of both the CLF and the controller. Across extensive experimental cases, our approach achieves a higher convergence rate and a larger region of attraction (ROA) when learning the CLF than existing methods. We also thoroughly analyse why the success rate of previous methods decreases during learning.
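
To make the idea of "Lyapunov conditions as inductive biases" concrete, the following is a minimal sketch (not the authors' exact architecture) of a neural CLF whose structure enforces V(0) = 0 and V(x) > 0 for x ≠ 0 by construction, so the learner only has to optimise the remaining decrease condition. The class name `NeuralCLF` and the regularisation constant `eps` are illustrative assumptions.

```python
# Sketch of a structurally positive-definite neural CLF, assuming the common
# construction V(x) = ||phi(x) - phi(0)||^2 + eps * ||x||^2.
import torch
import torch.nn as nn


class NeuralCLF(nn.Module):
    def __init__(self, state_dim: int, hidden_dim: int = 64, eps: float = 1e-3):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
        )
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Subtracting phi(0) forces V(0) = 0; the squared norm plus a small
        # quadratic term makes V positive definite by construction, so these
        # Lyapunov conditions need not appear as optimisation constraints.
        diff = self.phi(x) - self.phi(torch.zeros_like(x))
        return (diff ** 2).sum(dim=-1) + self.eps * (x ** 2).sum(dim=-1)


if __name__ == "__main__":
    clf = NeuralCLF(state_dim=2)
    x = torch.randn(8, 2, requires_grad=True)
    V = clf(x)
    # The decrease condition dV/dt < 0 along closed-loop trajectories is what
    # the learner would optimise (jointly with a controller) and the verifier
    # would check; here we only compute the gradient of V needed for it.
    grad_V = torch.autograd.grad(V.sum(), x)[0]
    print(V.shape, grad_V.shape)
```

Because positive definiteness is baked into the architecture, the training loss can focus on the decrease condition along closed-loop trajectories, which is the kind of reduced-constraint, end-to-end optimisation the paper advocates.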

Country of Origin
🇭🇰 Hong Kong

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)