Convergence Rates of Constrained Expected Improvement
By: Haowei Wang, Jingyi Wang, Zhongxiang Dai, and more
Potential Business Impact:
Finds best answers with tricky rules.
Constrained Bayesian optimization (CBO) methods have seen significant success in black-box optimization with constraints, and one of the most commonly used CBO methods is the constrained expected improvement (CEI) algorithm. CEI is a natural extension of the expected improvement (EI) algorithm that incorporates constraints. However, the theoretical convergence rate of CEI has not been established. In this work, we study the convergence rate of CEI by analyzing its simple regret upper bound. First, we show that when the objective function $f$ and constraint function $c$ are assumed to each lie in a reproducing kernel Hilbert space (RKHS), CEI achieves the convergence rates of $\mathcal{O} \left(t^{-\frac{1}{2}}\log^{\frac{d+1}{2}}(t) \right) \ \text{and }\ \mathcal{O}\left(t^{\frac{-\nu}{2\nu+d}} \log^{\frac{\nu}{2\nu+d}}(t)\right)$ for the commonly used squared exponential and Matérn kernels, respectively. Second, we show that when $f$ and $c$ are assumed to be sampled from Gaussian processes (GPs), CEI achieves the same convergence rates with high probability. Numerical experiments are performed to validate the theoretical analysis.
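The CEI acquisition rule the abstract refers to is usually written as expected improvement weighted by the probability of feasibility. The sketch below illustrates that rule under common modeling assumptions not spelled out in the abstract: minimization of $f$, feasibility defined as $c(x) \le 0$, and independent Gaussian posteriors for $f$ and $c$ at each candidate point. The function names and the candidate-grid usage are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu_f, sigma_f, f_best):
    """EI for minimization: E[max(f_best - f(x), 0)] with f(x) ~ N(mu_f, sigma_f^2)."""
    z = (f_best - mu_f) / sigma_f
    return (f_best - mu_f) * norm.cdf(z) + sigma_f * norm.pdf(z)

def constrained_ei(mu_f, sigma_f, mu_c, sigma_c, f_best):
    """CEI = EI(x) * P(c(x) <= 0), assuming independent posteriors for f and c."""
    prob_feasible = norm.cdf(-mu_c / sigma_c)  # P(c(x) <= 0) with c(x) ~ N(mu_c, sigma_c^2)
    return expected_improvement(mu_f, sigma_f, f_best) * prob_feasible

# Illustrative use: score a grid of candidates and pick the maximizer of CEI.
mu_f = np.array([0.2, -0.1, 0.5])      # posterior means of the objective
sigma_f = np.array([0.3, 0.4, 0.2])    # posterior std. devs of the objective
mu_c = np.array([-1.0, 0.0, 2.0])      # posterior means of the constraint
sigma_c = np.array([0.5, 0.5, 0.5])    # posterior std. devs of the constraint
scores = constrained_ei(mu_f, sigma_f, mu_c, sigma_c, f_best=0.0)
next_point = int(np.argmax(scores))
```

Note that a candidate with a promising objective posterior but a likely-violated constraint (large positive `mu_c`) is heavily down-weighted by the feasibility probability, which is the mechanism that distinguishes CEI from plain EI.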
Similar Papers
On the convergence rate of noisy Bayesian Optimization with Expected Improvement
Machine Learning (Stat)
Finds best settings faster, even with messy data.
Bayesian Optimization with Expected Improvement: No Regret and the Choice of Incumbent
Machine Learning (Stat)
Finds best answers faster for tricky problems.
Regret Analysis of Posterior Sampling-Based Expected Improvement for Bayesian Optimization
Machine Learning (Stat)
Finds best answers faster for hard problems.