Robust Bayesian Optimisation with Unbounded Corruptions
By: Abdelhamid Ezzerg, Ilija Bogunovic, Jeremias Knoblauch
Potential Business Impact:
Protects smart systems from bad data.
Bayesian optimization is critically vulnerable to extreme outliers. Existing provably robust methods typically assume a bounded cumulative corruption budget, which leaves them defenseless against even a single corruption of sufficient magnitude. To address this, we introduce a new adversary whose budget is bounded only in the frequency of corruptions, not in their magnitude. We then derive RCGP-UCB, an algorithm coupling the classical upper confidence bound (UCB) acquisition rule with a Robust Conjugate Gaussian Process (RCGP). We present stable and adaptive versions of RCGP-UCB, and prove that they achieve sublinear regret in the presence of up to $O(T^{1/2})$ and $O(T^{1/3})$ corruptions of possibly infinite magnitude. This robustness comes at near-zero cost: without outliers, RCGP-UCB's regret bounds match those of the standard GP-UCB algorithm.
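To give a flavour of the idea, here is a minimal illustrative sketch in NumPy. It is not the authors' implementation: the inverse-multiquadric weighting rule, the median centring, and all constants (`lengthscale`, `noise`, `c`, `beta`) are assumptions chosen for the example. The key mechanism it shows is the one the abstract describes: observations that look like extreme outliers receive small weights, which inflates their effective noise in the GP posterior, so a single corruption of arbitrary magnitude cannot drag the UCB acquisition arbitrarily far.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2, variance=1.0):
    # Squared-exponential kernel on 1-D inputs (illustrative choice).
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def imq_weights(y, centre, c=1.0):
    # Inverse-multiquadric weights: points far from the centre
    # (suspected outliers) get weights near 0. The centring rule
    # and scale c are assumptions for this sketch.
    return (1.0 + ((y - centre) / c) ** 2) ** -0.5

def weighted_gp_posterior(X, y, Xstar, noise=0.05, c=1.0):
    # Down-weighting an observation is equivalent to inflating its
    # per-point noise by 1/w, so corrupted points barely move the mean.
    w = imq_weights(y, np.median(y), c)
    K = rbf_kernel(X, X) + np.diag(noise / w)
    Ks = rbf_kernel(X, Xstar)
    Kss = rbf_kernel(Xstar, Xstar)
    L = np.linalg.cholesky(K + 1e-9 * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(Kss) - np.sum(v * v, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def robust_ucb_step(X, y, grid, beta=2.0):
    # UCB acquisition on the robust posterior: query the grid point
    # maximising mean + beta * std.
    mu, sd = weighted_gp_posterior(X, y, grid)
    return grid[np.argmax(mu + beta * sd)]
```

With a single corrupted observation of magnitude 100, the weighted posterior mean near the corrupted input stays far below 100, whereas an unweighted GP would track the corruption almost exactly; the UCB step then keeps exploring the rest of the domain.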
Similar Papers
Bayesian Optimization of Robustness Measures under Input Uncertainty: A Randomized Gaussian Process Upper Confidence Bound Approach
Machine Learning (Stat)
Finds best settings even with guesswork.
Improved Regret Bounds for Gaussian Process Upper Confidence Bound in Bayesian Optimization
Machine Learning (CS)
Makes smart guessing programs learn faster.
Unconstrained Robust Online Convex Optimization
Machine Learning (CS)
Learns from bad information without getting lost.