Robust Bayesian Optimization via Tempered Posteriors
By: Jiguang Li, Hengrui Luo
Potential Business Impact:
Makes computer learning more accurate and reliable.
Bayesian optimization (BO) iteratively fits a Gaussian process (GP) surrogate to accumulated evaluations and selects new queries via an acquisition function such as expected improvement (EI). In practice, BO often concentrates evaluations near the current incumbent, causing the surrogate to become overconfident and to understate predictive uncertainty in the region guiding subsequent decisions. We develop a robust GP-based BO procedure via tempered posterior updates, which downweight the likelihood by a power $\alpha \in (0,1]$ to mitigate overconfidence under local misspecification. We establish cumulative regret bounds for tempered BO under a family of generalized improvement rules, including EI, and show that tempering yields strictly sharper worst-case regret guarantees than the standard posterior ($\alpha = 1$), with the most favorable guarantees occurring near the classical EI choice. Motivated by our theoretical findings, we propose a prequential procedure for selecting $\alpha$ online: it decreases $\alpha$ when realized prediction errors exceed model-implied uncertainty and returns $\alpha$ toward one as calibration improves. Empirical results demonstrate that tempering provides a practical yet theoretically grounded tool for stabilizing BO surrogates under localized sampling.
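To make the mechanics concrete, below is a minimal sketch (not the authors' reference implementation) of the two ideas the abstract describes: a tempered GP posterior, using the fact that raising a Gaussian likelihood to a power $\alpha \in (0,1]$ is equivalent to inflating the observation-noise variance to $\sigma^2/\alpha$, and a simple prequential-style rule that shrinks $\alpha$ when standardized prediction errors exceed model-implied uncertainty and moves it back toward one otherwise. The RBF kernel, the shrink/grow factors, the calibration threshold, and the toy acquisition stand-in are illustrative assumptions, not quantities specified in the paper.

```python
import numpy as np


def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel k(a, b) = variance * exp(-||a - b||^2 / (2 l^2))."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)


def tempered_gp_posterior(X, y, X_star, alpha=0.5, noise_var=1e-2):
    """GP posterior mean/variance with the Gaussian likelihood tempered by power alpha.

    Tempering N(y | f, sigma^2)^alpha is equivalent to using the inflated noise
    variance sigma^2 / alpha, which widens predictive intervals relative to the
    standard posterior (alpha = 1).
    """
    K = rbf_kernel(X, X) + (noise_var / alpha) * np.eye(len(X))
    K_s = rbf_kernel(X, X_star)
    K_ss = rbf_kernel(X_star, X_star)
    L = np.linalg.cholesky(K)
    w = np.linalg.solve(L.T, np.linalg.solve(L, y))
    V = np.linalg.solve(L, K_s)
    mean = K_s.T @ w
    # Add the tempered observation-noise variance to get a predictive variance
    # for a new observed value under the tempered model.
    var = np.diag(K_ss) - np.sum(V**2, axis=0) + noise_var / alpha
    return mean, np.maximum(var, 1e-12)


def update_alpha(alpha, y_new, pred_mean, pred_var,
                 shrink=0.8, grow=1.05, threshold=1.0):
    """Prequential-style alpha update (illustrative constants, not from the paper).

    If the realized squared error exceeds the model-implied predictive variance
    (standardized error above `threshold`), shrink alpha to widen future
    posteriors; otherwise move alpha back toward 1 as calibration improves.
    """
    z2 = (y_new - pred_mean) ** 2 / pred_var
    if z2 > threshold:
        return max(1e-2, shrink * alpha)
    return min(1.0, grow * alpha)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: np.sin(3 * x[:, 0])
    X = rng.uniform(-1, 1, size=(10, 1))
    y = f(X) + 0.1 * rng.standard_normal(10)

    alpha = 1.0
    x_next = rng.uniform(-1, 1, size=(1, 1))   # stand-in for an acquisition-function step
    mean, var = tempered_gp_posterior(X, y, x_next, alpha=alpha)
    y_next = f(x_next) + 0.1 * rng.standard_normal(1)
    alpha = update_alpha(alpha, y_next[0], mean[0], var[0])
    print(f"posterior mean {mean[0]:+.3f}, variance {var[0]:.3f}, next alpha {alpha:.2f}")
```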
Similar Papers
Direct Regret Optimization in Bayesian Optimization
Machine Learning (CS)
Finds best answers faster by learning from many tries.
Bayesian Optimization with Expected Improvement: No Regret and the Choice of Incumbent
Machine Learning (Stat)
Finds best answers faster for tricky problems.
Tempering the Bayes Filter towards Improved Model-Based Estimation
Systems and Control
Makes computer guesses better when information is missing.