Convergence Rates of Constrained Expected Improvement

Published: May 16, 2025 | arXiv ID: 2505.11323v1

By: Haowei Wang, Jingyi Wang, Zhongxiang Dai, and more

Potential Business Impact:

Finds the best settings for black-box systems while respecting constraints.

Business Areas:
A/B Testing; Data and Analytics

Constrained Bayesian optimization (CBO) methods have seen significant success in black-box optimization with constraints, and one of the most commonly used CBO methods is the constrained expected improvement (CEI) algorithm. CEI is a natural extension of expected improvement (EI) when constraints are incorporated. However, the theoretical convergence rate of CEI has not been established. In this work, we study the convergence rate of CEI by analyzing its simple regret upper bound. First, we show that when the objective function $f$ and constraint function $c$ are assumed to each lie in a reproducing kernel Hilbert space (RKHS), CEI achieves convergence rates of $\mathcal{O}\left(t^{-\frac{1}{2}}\log^{\frac{d+1}{2}}(t)\right)$ and $\mathcal{O}\left(t^{-\frac{\nu}{2\nu+d}} \log^{\frac{\nu}{2\nu+d}}(t)\right)$ for the commonly used squared exponential and Matérn kernels, respectively. Second, we show that when $f$ and $c$ are assumed to be sampled from Gaussian processes (GPs), CEI achieves the same convergence rates with high probability. Numerical experiments are performed to validate the theoretical analysis.
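The abstract describes CEI as the natural constrained extension of EI; in the standard formulation, the CEI acquisition multiplies the closed-form EI of the objective by the posterior probability that the constraint is satisfied. A minimal sketch of that acquisition, assuming a single constraint $c(x) \le 0$ and given GP posterior means and standard deviations (the function names here are illustrative, not from the paper):

```python
from math import erf, exp, pi, sqrt

def _pdf(z):
    # standard normal density
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def _cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def constrained_ei(mu_f, sigma_f, mu_c, sigma_c, f_best):
    """CEI acquisition for minimization: EI(x) times Pr[c(x) <= 0].

    mu_f, sigma_f: GP posterior mean/std of the objective at x
    mu_c, sigma_c: GP posterior mean/std of the constraint at x
    f_best: best feasible objective value observed so far
    """
    if sigma_f <= 0.0:
        ei = max(f_best - mu_f, 0.0)
    else:
        z = (f_best - mu_f) / sigma_f
        ei = (f_best - mu_f) * _cdf(z) + sigma_f * _pdf(z)
    # probability of feasibility under the constraint's GP posterior
    pof = _cdf(-mu_c / sigma_c) if sigma_c > 0.0 else float(mu_c <= 0.0)
    return ei * pof
```

At each iteration the next query point maximizes this product, so candidates that look likely infeasible are down-weighted even if their EI is large.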

Country of Origin
πŸ‡­πŸ‡° πŸ‡ΈπŸ‡¬ Hong Kong, Singapore

Page Count
23 pages

Category
Statistics – Machine Learning (stat.ML)