Certifying Stability of Reinforcement Learning Policies using Generalized Lyapunov Functions
By: Kehan Long, Jorge Cortés, Nikolay Atanasov
Potential Business Impact:
Makes robots that learn their own control policies safer by mathematically checking that their behavior stays stable.
We study the problem of certifying the stability of closed-loop systems under control policies derived from optimal control or reinforcement learning (RL). Classical Lyapunov methods require a strict step-wise decrease in the Lyapunov function, but such a certificate is difficult to construct for a learned control policy. The value function associated with an RL policy is a natural Lyapunov function candidate, but it is not clear how it should be modified to serve as a valid certificate. To gain intuition, we first study the linear quadratic regulator (LQR) problem and make two key observations. First, a Lyapunov function can be obtained from the value function of an LQR policy by augmenting it with a residual term related to the system dynamics and stage cost. Second, the classical Lyapunov decrease requirement can be relaxed to a generalized Lyapunov condition that requires only a decrease on average over multiple time steps. Using this intuition, we consider the nonlinear setting and formulate an approach to learn generalized Lyapunov functions by augmenting RL value functions with neural network residual terms. Our approach successfully certifies the stability of RL policies trained on Gymnasium and DeepMind Control benchmarks. We also extend our method to jointly train neural controllers and stability certificates using a multi-step Lyapunov loss, resulting in larger certified inner approximations of the region of attraction compared to the classical Lyapunov approach. Overall, our formulation enables stability certification for a broad class of systems with learned policies by making certificates easier to construct, thereby bridging classical control theory and modern learning-based methods.
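A minimal sketch of the relaxation described in the abstract, in illustrative notation not taken from the paper (V is the candidate certificate, x_{t+1} = f(x_t) the closed-loop dynamics, k a horizon): the classical Lyapunov condition requires a step-wise decrease,

    V(f(x)) - V(x) < 0 \quad \text{for all } x \neq 0,

while a generalized multi-step condition of the kind described above only asks that the value decrease on average over the next k steps, for example

    \frac{1}{k} \sum_{i=1}^{k} V(x_{t+i}) < V(x_t),

so V may increase on individual steps as long as it decreases on average over the horizon. The candidate itself is an RL value function augmented with a residual, e.g. V(x) = V_{\pi}(x) + r_{\theta}(x), with r_{\theta} a neural network residual term. The specific averaging weights and residual parameterization shown here are assumptions for illustration, not the paper's exact formulation.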
Similar Papers
Off Policy Lyapunov Stability in Reinforcement Learning
Systems and Control
Makes robots learn safely and faster.
Stability Enhancement in Reinforcement Learning via Adaptive Control Lyapunov Function
Machine Learning (CS)
Makes robots learn safely without breaking things.
A Review On Safe Reinforcement Learning Using Lyapunov and Barrier Functions
Systems and Control
Keeps smart machines from making dangerous mistakes.