Bayesian Optimization with Expected Improvement: No Regret and the Choice of Incumbent
By: Jingyi Wang, Haowei Wang, Szu Hui Ng, and more
Potential Business Impact:
Finds best answers faster for tricky problems.
Expected improvement (EI) is one of the most widely used acquisition functions in Bayesian optimization (BO). Despite its proven empirical success in applications, the cumulative regret upper bound of EI remains an open question. In this paper, we analyze the classic noisy Gaussian process expected improvement (GP-EI) algorithm. We consider the Bayesian setting, where the objective is a sample from a GP. Three commonly used incumbents, namely the best posterior mean incumbent (BPMI), the best sampled posterior mean incumbent (BSPMI), and the best observation incumbent (BOI), are considered as the choices of the current best value in GP-EI. We present for the first time cumulative regret upper bounds for GP-EI with BPMI and BSPMI. Importantly, we show that in both cases GP-EI is a no-regret algorithm for both squared exponential (SE) and Matérn kernels. Further, we show for the first time that GP-EI with BOI either achieves a sublinear cumulative regret upper bound or has a fast-converging noisy simple regret bound for SE and Matérn kernels. Our results provide theoretical guidance on the choice of incumbent when practitioners apply GP-EI in the noisy setting. Numerical experiments are conducted to validate our findings.
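To make the "no-regret" terminology concrete, the standard regret notions can be sketched as follows (a sketch of the usual definitions under a minimization convention; the notation is ours, not quoted from the paper):

```latex
% Cumulative regret after T steps, with x_t the point queried at step t
% and x^* a global minimizer of f (minimization convention):
R_T = \sum_{t=1}^{T} \left( f(x_t) - f(x^*) \right)
% An algorithm is "no-regret" when R_T / T \to 0; the simple regret
% \min_{1 \le t \le T} f(x_t) - f(x^*) then vanishes as well.
```

And here is a minimal NumPy/SciPy sketch (illustrative only, not the authors' implementation) of the EI acquisition together with the three incumbent choices named in the abstract; `mu_grid`, `sigma_grid`, `mu_obs`, and `y_obs` are assumed placeholders for the GP posterior mean/standard deviation on a candidate grid, the posterior mean at sampled points, and the noisy observations:

```python
# Minimal sketch of noisy GP-EI with the three incumbent choices (BPMI,
# BSPMI, BOI) discussed in the abstract, under a minimization convention.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, incumbent):
    """EI(x) = E[max(incumbent - f(x), 0)] under the GP posterior N(mu, sigma^2)."""
    sigma = np.maximum(sigma, 1e-12)            # guard against zero variance
    z = (incumbent - mu) / sigma
    return (incumbent - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def choose_incumbent(kind, y_obs, mu_obs, mu_grid):
    """The three incumbent definitions analyzed in the paper (minimization)."""
    if kind == "BPMI":    # best posterior mean over the domain (grid approximation)
        return mu_grid.min()
    if kind == "BSPMI":   # best posterior mean among already-sampled points
        return mu_obs.min()
    if kind == "BOI":     # best (noisy) observation so far
        return y_obs.min()
    raise ValueError(f"unknown incumbent choice: {kind}")

# Toy usage: pick the next query point by maximizing EI on the grid.
rng = np.random.default_rng(0)
mu_grid = rng.normal(size=200)                  # stand-in posterior mean
sigma_grid = rng.uniform(0.1, 1.0, size=200)    # stand-in posterior std
y_obs, mu_obs = rng.normal(size=5), rng.normal(size=5)

for kind in ("BPMI", "BSPMI", "BOI"):
    inc = choose_incumbent(kind, y_obs, mu_obs, mu_grid)
    next_idx = np.argmax(expected_improvement(mu_grid, sigma_grid, inc))
    print(kind, "-> incumbent", round(float(inc), 3), "next index", int(next_idx))
```

The only difference among the three variants is which scalar is plugged in as the incumbent; the paper's contribution is showing how that choice affects the regret guarantees in the noisy setting.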
Similar Papers
Regret Analysis of Posterior Sampling-Based Expected Improvement for Bayesian Optimization
Machine Learning (Stat)
Finds best answers faster for hard problems.
On the convergence rate of noisy Bayesian Optimization with Expected Improvement
Machine Learning (Stat)
Finds best settings faster, even with messy data.
Convergence Rates of Constrained Expected Improvement
Machine Learning (Stat)
Finds best answers with tricky rules.