Policy Gradient Method for LQG Control via Input-Output-History Representation: Convergence to $O(\epsilon)$-Stationary Points
By: Tomonori Sadamoto, Takashi Tanaka
Potential Business Impact:
Helps engineers use learning-based (policy gradient) methods to design controllers for noisy, partially observed systems such as robots, with guarantees that the learning procedure converges.
We study the policy gradient method (PGM) for the linear quadratic Gaussian (LQG) dynamic output-feedback control problem using an input-output-history (IOH) representation of the closed-loop system. First, we show that any dynamic output-feedback controller is equivalent to a static partial-state feedback gain for a new system representation characterized by a finite-length IOH. Leveraging this equivalence, we reformulate the search for an optimal dynamic output-feedback controller as an optimization problem over the corresponding partial-state feedback gain. Next, we introduce a relaxed version of the IOH-based LQG problem by incorporating a small process noise with covariance $\epsilon I$ into the new system to ensure coerciveness, a key condition for establishing gradient-based convergence guarantees. Consequently, we show that a vanilla PGM for the relaxed problem converges to an $\mathcal{O}(\epsilon)$-stationary point, i.e., $\overline{K}$ satisfying $\|\nabla J(\overline{K})\|_F \leq \mathcal{O}(\epsilon)$, where $J$ denotes the original LQG cost. Numerical experiments indicate convergence to a neighborhood of the globally optimal LQG controller.
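To make the overall pipeline concrete, below is a minimal, illustrative sketch (not the paper's algorithm or code) of a vanilla policy gradient iteration on an LQG-type cost in which the controller is a static gain acting on a finite input-output history. All system matrices, the IOH length, noise levels, iteration counts, and step size are hypothetical placeholders, and the gradient is approximated with a two-point zeroth-order estimate rather than the exact gradient of the relaxed cost analyzed in the paper.

```python
# Illustrative sketch only: vanilla policy gradient over a static gain K that
# feeds back a finite input-output history (IOH). Plant, weights, IOH length,
# and step size are hypothetical; the paper's analysis concerns the exact
# gradient of the relaxed LQG cost (process noise covariance eps*I added).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical partially observed plant: x+ = A x + B u + w,  y = C x + v
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.eye(1)          # LQG stage-cost weights
sigma_w, sigma_v = 0.1, 0.1          # process / measurement noise levels
n_hist = 4                           # IOH length (number of past inputs and outputs)
dim_h = n_hist * (B.shape[1] + C.shape[0])

def cost(K, T=60, n_rollouts=10):
    """Monte-Carlo estimate of a finite-horizon LQG cost under u = K h (IOH feedback)."""
    total = 0.0
    for _ in range(n_rollouts):
        x = rng.normal(size=(2,))
        u_hist = np.zeros((n_hist, 1))
        y_hist = np.zeros((n_hist, 1))
        J = 0.0
        for _ in range(T):
            y = C @ x + sigma_v * rng.normal(size=(1,))
            y_hist = np.vstack([y_hist[1:], y.reshape(1, -1)])
            h = np.concatenate([u_hist.ravel(), y_hist.ravel()])  # IOH vector
            u = K @ h
            J += x @ Q @ x + u @ R @ u
            x = A @ x + B @ u + sigma_w * rng.normal(size=(2,))
            u_hist = np.vstack([u_hist[1:], u.reshape(1, -1)])
        total += J / T
    return total / n_rollouts

def zo_gradient(K, r=0.05, n_samples=10):
    """Two-point zeroth-order estimate of the cost gradient with respect to K."""
    g = np.zeros_like(K)
    for _ in range(n_samples):
        U = rng.normal(size=K.shape)
        U /= np.linalg.norm(U)
        g += (cost(K + r * U) - cost(K - r * U)) / (2 * r) * U
    return g * (K.size / n_samples)

K = np.zeros((B.shape[1], dim_h))    # initial IOH feedback gain
step = 1e-3
for it in range(30):                 # vanilla policy gradient iterations
    K -= step * zo_gradient(K)
```

The sketch only conveys the structure of the approach: the dynamic output-feedback design problem is recast as tuning a single static gain on the IOH vector, and a plain gradient step is applied to the resulting cost.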
Similar Papers
Policy Gradient Adaptive Control for the LQR: Indirect and Direct Approaches
Optimization and Control
Studies indirect and direct policy gradient approaches to adaptive control for the LQR.
Proximal Gradient Dynamics and Feedback Control for Equality-Constrained Composite Optimization
Optimization and Control
Uses proximal gradient dynamics and feedback-control ideas to solve equality-constrained composite optimization problems.
Second-Order Policy Gradient Methods for the Linear Quadratic Regulator
Systems and Control
Uses second-order (curvature) information to speed up policy gradient methods for the linear quadratic regulator.