Regret Analysis of Posterior Sampling-Based Expected Improvement for Bayesian Optimization
By: Shion Takeno, Yu Inatsu, Masayuki Karasuyama, and more
Potential Business Impact:
Finds the best answers to expensive problems using fewer costly evaluations.
Bayesian optimization is a powerful tool for optimizing an expensive-to-evaluate black-box function. In particular, the effectiveness of expected improvement (EI) has been demonstrated in a wide range of applications. However, theoretical analyses of EI are limited compared with other theoretically established algorithms. This paper analyzes a randomized variant of EI that measures improvement over the maximum of a posterior sample path rather than the best observed value. We show that this posterior sampling-based random EI achieves sublinear Bayesian cumulative regret bounds under the assumption that the black-box function follows a Gaussian process. Finally, we demonstrate the effectiveness of the proposed method through numerical experiments.
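The core idea lends itself to a short illustration. Below is a minimal sketch, assuming a 1-D toy objective, an RBF-kernel GP with fixed hyperparameters, and a finite candidate grid; the kernel, hyperparameters, and toy function are illustrative choices, not the paper's experimental setup. The key step is drawing one posterior sample path and using its maximum, instead of the best observed value, as the incumbent inside the closed-form EI.

import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, lengthscale=0.2):
    # Squared-exponential kernel on 1-D inputs (illustrative hyperparameter).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X_train, y_train, X_cand, noise=1e-4):
    # Standard GP regression posterior mean and covariance on the candidate grid.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_cand)
    Kss = rbf_kernel(X_cand, X_cand)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    v = np.linalg.solve(L, Ks)
    return Ks.T @ alpha, Kss - v.T @ v

def posterior_sampling_ei(X_train, y_train, X_cand, rng):
    mu, cov = gp_posterior(X_train, y_train, X_cand)
    sigma = np.sqrt(np.clip(np.diag(cov), 1e-12, None))
    # Randomized incumbent: the maximum of one posterior sample path,
    # in place of the best observed value used by standard EI.
    jitter = 1e-6 * np.eye(len(X_cand))
    path = mu + np.linalg.cholesky(cov + jitter) @ rng.standard_normal(len(X_cand))
    threshold = path.max()
    # Closed-form EI evaluated against the sampled threshold.
    z = (mu - threshold) / sigma
    ei = (mu - threshold) * norm.cdf(z) + sigma * norm.pdf(z)
    return X_cand[np.argmax(ei)]

rng = np.random.default_rng(0)
f = lambda x: np.sin(6 * x) + 0.5 * x   # hypothetical toy black-box objective
X = rng.uniform(0, 1, 3)                # small initial design
y = f(X)
X_cand = np.linspace(0, 1, 200)
for _ in range(10):                     # Bayesian optimization loop
    x_next = posterior_sampling_ei(X, y, X_cand, rng)
    X, y = np.append(X, x_next), np.append(y, f(x_next))
print("best observed:", X[np.argmax(y)], y.max())

Because the incumbent is resampled at every iteration, the acquisition function is randomized from round to round, which is the randomization the abstract refers to.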
Similar Papers
Bayesian Optimization with Expected Improvement: No Regret and the Choice of Incumbent
Machine Learning (Stat)
Finds the best answers faster for tricky problems.
On the convergence rate of noisy Bayesian Optimization with Expected Improvement
Machine Learning (Stat)
Finds the best settings faster, even with noisy data.
Direct Regret Optimization in Bayesian Optimization
Machine Learning (CS)
Finds the best answers faster by learning from many tries.