Bayesian Optimization with Expected Improvement: No Regret and the Choice of Incumbent

Published: August 21, 2025 | arXiv ID: 2508.15674v1

By: Jingyi Wang, Haowei Wang, Szu Hui Ng, and more

Potential Business Impact:

Finds the best answers faster for expensive, noisy optimization problems.

Business Areas:
Business Intelligence, Data and Analytics

Expected improvement (EI) is one of the most widely used acquisition functions in Bayesian optimization (BO). Despite its proven empirical success in applications, the cumulative regret upper bound of EI remains an open question. In this paper, we analyze the classic noisy Gaussian process expected improvement (GP-EI) algorithm in the Bayesian setting, where the objective is a sample from a GP. We consider three commonly used incumbents as the choice of current best value in GP-EI: the best posterior mean incumbent (BPMI), the best sampled posterior mean incumbent (BSPMI), and the best observation incumbent (BOI). We present, for the first time, cumulative regret upper bounds of GP-EI with BPMI and BSPMI. Importantly, we show that in both cases GP-EI is a no-regret algorithm for both squared exponential (SE) and Matérn kernels. Further, we show for the first time that GP-EI with BOI either achieves a sublinear cumulative regret upper bound or attains a fast-converging noisy simple regret bound for SE and Matérn kernels. Our results provide theoretical guidance on the choice of incumbent when practitioners apply GP-EI in the noisy setting. Numerical experiments are conducted to validate our findings.
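
Below is a minimal Python sketch of noisy GP-EI that contrasts the three incumbent choices named in the abstract. It is not the authors' code: the toy objective, noise level, discretized domain, and the use of scikit-learn's GaussianProcessRegressor with a Matérn kernel are all illustrative assumptions.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, incumbent):
    # EI for maximization: E[max(f(x) - incumbent, 0)] under N(mu, sigma^2).
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - incumbent) / sigma
    return (mu - incumbent) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
f = lambda x: np.sin(3.0 * x) - x ** 2 + 0.7 * x     # toy objective (assumed)
noise = 0.1                                          # observation noise std (assumed)
X_cand = np.linspace(-2.0, 2.0, 401).reshape(-1, 1)  # discretized search domain

X = rng.uniform(-2.0, 2.0, size=(3, 1))              # initial design points
y = f(X).ravel() + noise * rng.standard_normal(3)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=noise ** 2)
for t in range(20):
    gp.fit(X, y)
    mu, sigma = gp.predict(X_cand, return_std=True)
    mu_sampled = gp.predict(X)

    incumbents = {
        "BPMI": mu.max(),           # best posterior mean over all candidates
        "BSPMI": mu_sampled.max(),  # best posterior mean at sampled points
        "BOI": y.max(),             # best noisy observation
    }
    # The paper bounds the regret of each choice; BSPMI is used here.
    ei = expected_improvement(mu, sigma, incumbents["BSPMI"])
    x_next = X_cand[np.argmax(ei)].reshape(1, -1)
    y_next = f(x_next).ravel() + noise * rng.standard_normal(1)
    X = np.vstack([X, x_next])
    y = np.concatenate([y, y_next])

gp.fit(X, y)
print("Estimated maximizer:", X_cand[np.argmax(gp.predict(X_cand))])

Swapping the key passed to expected_improvement between "BPMI", "BSPMI", and "BOI" switches the incumbent, which is exactly the design choice whose regret behavior the paper analyzes.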

Country of Origin
πŸ‡ΈπŸ‡¬ Singapore

Page Count
62 pages

Category
Statistics: Machine Learning (stat.ML)