Revisiting Regret Benchmarks in Online Non-Stochastic Control
By: Vijeth Hebbar, Cédric Langbort
Potential Business Impact:
Teaches robots to learn better without knowing all rules.
In the online non-stochastic control problem, an agent sequentially selects control inputs for a linear dynamical system when facing unknown and adversarially selected convex costs and disturbances. A common metric for evaluating control policies in this setting is policy regret, defined relative to the best-in-hindsight linear feedback controller. However, for general convex costs, this benchmark may be less meaningful since linear controllers can be highly suboptimal. To address this, we introduce an alternative, more suitable benchmark: the performance of the best fixed input. We show that this benchmark can be viewed as a natural extension of the standard benchmark used in online convex optimization and propose a novel online control algorithm that achieves sublinear regret with respect to this new benchmark. We also discuss the connections between our method and the original one proposed by Agarwal et al. in their seminal work introducing the online non-stochastic control problem, and compare the performance of both approaches through numerical simulations.
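To make the contrast in the abstract concrete, the two benchmarks can be sketched as follows. The notation below (cost functions \(c_t\), comparator classes \(\mathcal{K}\) and \(\mathcal{U}\), counterfactual trajectories \(x_t^K\) and \(x_t^u\)) is an assumed formalization for illustration, not the paper's own notation. Standard policy regret compares against the best fixed linear feedback controller:

\[
R_T^{\mathrm{lin}} \;=\; \sum_{t=1}^{T} c_t(x_t, u_t) \;-\; \min_{K \in \mathcal{K}} \sum_{t=1}^{T} c_t\big(x_t^{K},\, K x_t^{K}\big),
\]

where \(x_t^{K}\) is the state trajectory that would have resulted from playing \(u_t = K x_t\) throughout. The alternative benchmark instead compares against the best fixed (constant) control input:

\[
R_T^{\mathrm{fix}} \;=\; \sum_{t=1}^{T} c_t(x_t, u_t) \;-\; \min_{u \in \mathcal{U}} \sum_{t=1}^{T} c_t\big(x_t^{u},\, u\big),
\]

where \(x_t^{u}\) is the trajectory induced by applying the same input \(u\) at every step. This mirrors the comparator in online convex optimization, where regret is measured against the best fixed action in hindsight.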
Similar Papers
Regret Bounds for Robust Online Decision Making
Machine Learning (CS)
Helps computers learn from uncertain information.
Robust Regret Control with Uncertainty-Dependent Baseline
Optimization and Control
Helps machines learn better with unknown problems.
Beyond Worst-Case Online Classification: VC-Based Regret Bounds for Relaxed Benchmarks
Machine Learning (Stat)
Makes computer learning better with less data.