Beyond Quadratic Costs: A Bregman Divergence Approach to H$_\infty$ Control
By: Joudi Hajar, Reza Ghane, Babak Hassibi
Potential Business Impact:
Makes robots and other automated systems safer and more efficient by giving controllers worst-case performance guarantees under costs beyond the quadratic.
In the past couple of decades, non-quadratic convex penalties have reshaped signal processing and machine learning; in robust control, however, general convex costs break the Riccati and storage-function structure that makes the design tractable. Practitioners therefore default to approximations, heuristics, or robust model predictive control schemes solved online over short horizons. We close this gap by extending $H_\infty$ control of discrete-time linear systems to strictly convex penalties on state, input, and disturbance, recasting the objective with Bregman divergences that admit a completion-of-squares decomposition. The result is a closed-form, time-invariant, full-information stabilizing controller that minimizes a worst-case performance ratio over the infinite horizon. Necessary and sufficient existence and optimality conditions are given by a Riccati-like identity together with a concavity requirement; with quadratic costs, these collapse to the classical $H_\infty$ algebraic Riccati equation and the associated negative-semidefinite condition, recovering the linear central controller. With general strictly convex costs, the optimal controller is nonlinear and can enable safety envelopes, sparse actuation, and bang-bang policies with rigorous $H_\infty$ guarantees.
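For orientation, here is a minimal sketch of the objects involved, using standard definitions and assumed notation rather than the paper's own. For a strictly convex, differentiable potential $\phi$, the Bregman divergence is
$$ D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle, $$
and for the quadratic potential $\phi(x) = \tfrac{1}{2} x^\top Q x$ with $Q \succ 0$ it reduces to the familiar weighted quadratic $D_\phi(x, y) = \tfrac{1}{2}(x - y)^\top Q (x - y)$. In that quadratic special case, and in one common full-information formulation with assumed dynamics $x_{t+1} = A x_t + B u_t + G w_t$ and stage cost $x_t^\top Q x_t + u_t^\top R u_t - \gamma^2 w_t^\top w_t$, the Riccati-like identity mentioned above collapses to the classical discrete-time $H_\infty$ game algebraic Riccati equation
$$ P = Q + A^\top P A - A^\top P \begin{bmatrix} B & G \end{bmatrix} \begin{bmatrix} R + B^\top P B & B^\top P G \\ G^\top P B & G^\top P G - \gamma^2 I \end{bmatrix}^{-1} \begin{bmatrix} B & G \end{bmatrix}^\top P A, $$
together with a definiteness requirement on $G^\top P G - \gamma^2 I$ (the negative-(semi)definite condition referred to above), which yields a linear state-feedback central controller $u_t = -K x_t$.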
Similar Papers
Beyond Quadratic Costs in LQR: Bregman Divergence Control
Systems and Control
Makes robots smarter and safer with new math.
Policy Optimization in Robust Control: Weak Convexity and Subgradient Methods
Optimization and Control
Makes robots smarter and more reliable.
Optimality of Linear Policies in Distributionally Robust Linear Quadratic Control
Optimization and Control
Makes robots learn better even with bad information.