Bridging Continuous-time LQR and Reinforcement Learning via Gradient Flow of the Bellman Error
By: Armin Gießler, Albertus Johannes Malan, Sören Hohmann
Potential Business Impact:
Could help robots and other automated systems compute optimal controllers faster and more reliably.
In this paper, we present a novel method for computing the optimal feedback gain of the infinite-horizon Linear Quadratic Regulator (LQR) problem via an ordinary differential equation. We introduce a novel continuous-time Bellman error, derived from the Hamilton-Jacobi-Bellman (HJB) equation, which quantifies the suboptimality of stabilizing policies and is parametrized in terms of the feedback gain. We analyze its properties, including its effective domain, smoothness, and coerciveness, and show the existence of a unique stationary point within the stability region. Furthermore, we derive a closed-form gradient expression of the Bellman error that induces a gradient flow. This gradient flow converges to the optimal feedback gain and generates a unique trajectory consisting exclusively of stabilizing feedback policies. Additionally, this work draws interesting connections between LQR theory and Reinforcement Learning (RL) by recasting the suboptimality associated with the Algebraic Riccati Equation (ARE) as a Bellman error, adopting a state-independent formulation, and leveraging Lyapunov equations to overcome the infinite-horizon challenge. We validate our method in simulation and compare it to the state of the art.
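The abstract's core recipe, driving a feedback gain along an ordinary differential equation built from Lyapunov equations until it reaches the Riccati solution, can be illustrated with a short numerical sketch. The code below is not the paper's Bellman-error gradient; it uses the classical LQR-cost gradient 2(R K - B^T P_K) X_K as a stand-in, with hypothetical system matrices, an arbitrary stabilizing initial gain, and a forward-Euler step in place of the continuous-time gradient flow.

```python
# Illustrative sketch: gradient flow over stabilizing LQR feedback gains,
# computed via Lyapunov equations. This is the classical LQR-cost gradient,
# not the paper's Bellman-error gradient; matrices below are hypothetical.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Example system (chosen for illustration only)
A = np.array([[0.0, 1.0], [-1.0, 0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)
Sigma = np.eye(2)  # weighting over initial states

def cost_gradient(K):
    """Gradient of J(K) = tr(P_K @ Sigma) for a stabilizing gain K."""
    A_cl = A - B @ K
    # P_K solves (A-BK)^T P + P (A-BK) + Q + K^T R K = 0
    P = solve_continuous_lyapunov(A_cl.T, -(Q + K.T @ R @ K))
    # X_K solves (A-BK) X + X (A-BK)^T + Sigma = 0
    X = solve_continuous_lyapunov(A_cl, -Sigma)
    return 2.0 * (R @ K - B.T @ P) @ X

# Forward-Euler integration of the gradient flow dK/dt = -grad J(K),
# started from a stabilizing initial gain.
K = np.array([[2.0, 2.0]])
assert np.all(np.linalg.eigvals(A - B @ K).real < 0), "initial gain must stabilize"
step = 1e-2
for _ in range(20000):
    K = K - step * cost_gradient(K)

# Compare with the Riccati solution K* = R^{-1} B^T P*
P_star = solve_continuous_are(A, B, Q, R)
K_star = np.linalg.solve(R, B.T @ P_star)
print("gradient-flow K:", K)
print("Riccati K*:     ", K_star)
```

With a sufficiently small step, the iterates stay inside the stabilizing set, so both Lyapunov equations remain solvable along the whole trajectory; this mirrors, for the classical cost, the trajectory property the abstract claims for the Bellman-error gradient flow.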
Similar Papers
Optimal Output Feedback Learning Control for Discrete-Time Linear Quadratic Regulation
Systems and Control
Teaches robots to learn how to control things.
Is Bellman Equation Enough for Learning Control?
Machine Learning (CS)
Makes smart robots learn the right way.
Learning-Based Stable Optimal Control for Infinite-Time Nonlinear Regulation Problems
Systems and Control
Makes robots fly safely without crashing.