Off Policy Lyapunov Stability in Reinforcement Learning

Published: September 11, 2025 | arXiv ID: 2509.09863v1

By: Sarvan Gill, Daniela Constantinescu

Potential Business Impact:

Enables robots to learn control policies faster while preserving stability guarantees.

Business Areas:
Embedded Systems Hardware, Science and Engineering, Software

Traditional reinforcement learning lacks the ability to provide stability guarantees. More recent algorithms learn Lyapunov functions alongside the control policies to ensure stable learning. However, current self-learned Lyapunov functions are sample-inefficient due to their on-policy nature. This paper introduces a method for learning Lyapunov functions off-policy and incorporates the proposed off-policy Lyapunov function into the Soft Actor-Critic and Proximal Policy Optimization algorithms to provide them with a data-efficient stability certificate. Simulations of an inverted pendulum and a quadrotor illustrate the improved performance of the two algorithms when endowed with the proposed off-policy Lyapunov function.
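The core idea, checking a Lyapunov decrease condition on transitions drawn from a replay buffer rather than from fresh on-policy rollouts, can be sketched minimally. The quadratic candidate, the decrease threshold `alpha`, and all names below are illustrative assumptions for a simple linear system, not the authors' exact formulation:

```python
import numpy as np

def lyapunov_value(P, s):
    # Hypothetical quadratic Lyapunov candidate V(s) = s^T P s
    return s @ P @ s

def decrease_condition_satisfied(P, transitions, alpha=0.1):
    """Check V(s') - V(s) <= -alpha * V(s) on a batch of (s, s')
    pairs drawn from a replay buffer (off-policy data)."""
    return all(
        lyapunov_value(P, s_next) - lyapunov_value(P, s)
        <= -alpha * lyapunov_value(P, s)
        for s, s_next in transitions
    )

# Example: a stable linear system s' = A s with a contracting A,
# so the identity matrix serves as a valid Lyapunov candidate.
A = np.array([[0.9, 0.0], [0.0, 0.8]])
P = np.eye(2)
rng = np.random.default_rng(0)
states = rng.normal(size=(16, 2))
transitions = [(s, A @ s) for s in states]
print(decrease_condition_satisfied(P, transitions))  # True for this stable system
```

In the actual algorithms, a neural Lyapunov critic would replace the fixed quadratic form and be trained on such replay-buffer batches, which is what makes the certificate data-efficient compared to on-policy alternatives.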

Country of Origin
🇨🇦 Canada

Page Count
10 pages

Category
Electrical Engineering and Systems Science:
Systems and Control