Score: 1

Policy Gradient Adaptive Control for the LQR: Indirect and Direct Approaches

Published: May 6, 2025 | arXiv ID: 2505.03706v2

By: Feiran Zhao, Alessandro Chiuso, Florian Dörfler

Potential Business Impact:

Lets controllers in robots and autonomous vehicles improve their own performance from online closed-loop data while guaranteeing stability.

Business Areas:
Autonomous Vehicles, Transportation

Motivated by recent advances in reinforcement learning and direct data-driven control, we propose policy gradient adaptive control (PGAC) for the linear quadratic regulator (LQR), which uses online closed-loop data to improve the control policy while maintaining stability. Our method adaptively updates the feedback policy by descending the gradient of the LQR cost and is categorized as indirect when gradients are computed via an estimated model, versus direct when gradients are derived from data using a sample covariance parameterization. Beyond the vanilla gradient, we also showcase the merits of the natural gradient and Gauss-Newton methods for the policy update. Notably, natural gradient descent bridges the indirect and direct PGAC, and the Gauss-Newton method of the indirect PGAC leads to an adaptive version of the celebrated Hewer's algorithm. To account for the uncertainty from noise, we propose a regularization method for both indirect and direct PGAC. For all the considered PGAC approaches, we show closed-loop stability and convergence of the policy to the optimal LQR gain. Simulations validate our theoretical findings and demonstrate the robustness and computational efficiency of PGAC.
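
For intuition on what a policy gradient step for the LQR looks like, here is a minimal sketch of the model-based (indirect) update using the standard exact-gradient formulas for the discrete-time LQR (in the style of Fazel et al., 2018), which the abstract's vanilla/natural-gradient/Gauss-Newton taxonomy builds on. This is not the paper's adaptive online algorithm; the system matrices, costs, initial gain, and step size below are illustrative assumptions.

```python
# Sketch: exact (model-based) policy gradient for the discrete-time LQR,
# with pointers to the natural-gradient and Gauss-Newton variants.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def lqr_gradient(K, A, B, Q, R, Sigma0):
    """Exact gradient of the LQR cost J(K) under the policy u_t = -K x_t."""
    Acl = A - B @ K                                  # closed-loop dynamics
    # Value matrix: P = Q + K'RK + Acl' P Acl
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    # State covariance: Sigma = Sigma0 + Acl Sigma Acl'
    Sigma = solve_discrete_lyapunov(Acl, Sigma0)
    E = (R + B.T @ P @ B) @ K - B.T @ P @ A          # gradient "direction" term
    return 2.0 * E @ Sigma, E, P                     # grad J(K) = 2 E Sigma

# Toy double-integrator system (assumed values, not from the paper)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
Sigma0 = np.eye(2)

K = np.array([[1.0, 1.0]])                           # stabilizing initial gain
for _ in range(200):
    grad, E, P = lqr_gradient(K, A, B, Q, R, Sigma0)
    # Vanilla gradient descent. A natural-gradient step uses E instead of
    # grad; a full Gauss-Newton step K <- inv(R + B'PB) @ B.T @ P @ A
    # recovers Hewer's algorithm (policy iteration).
    K = K - 1e-3 * grad
print("final gain K:", K)
```

The indirect PGAC of the paper replaces the known (A, B) above with online estimates, while the direct variant computes the same kind of gradient directly from sampled closed-loop data via a covariance parameterization.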

Country of Origin
🇨🇭 🇮🇹 Switzerland, Italy

Page Count
16 pages

Category
Mathematics: Optimization and Control