Bayesian Optimization of Robustness Measures under Input Uncertainty: A Randomized Gaussian Process Upper Confidence Bound Approach
By: Yu Inatsu
Potential Business Impact:
Finds the best settings even when inputs are uncertain.
Bayesian optimization based on the Gaussian process upper confidence bound (GP-UCB) offers theoretical guarantees for optimizing black-box functions. In practice, however, black-box functions often involve input uncertainty. To handle such cases, GP-UCB can be extended to optimize evaluation criteria known as robustness measures. However, GP-UCB-based methods for robustness measures require a trade-off parameter, $\beta$, which, as in the original GP-UCB, must be set sufficiently large to guarantee theoretical validity. In this study, we propose the randomized robustness measure GP-UCB (RRGP-UCB), a novel method that samples $\beta$ from a chi-squared-based probability distribution, eliminating the need to specify $\beta$ explicitly. Notably, the expected value of $\beta$ under this distribution is not excessively large. Furthermore, we show that RRGP-UCB yields tight bounds on the expected regret between the optimal and estimated solutions. Numerical experiments demonstrate the effectiveness of the proposed method.
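To make the randomized-$\beta$ idea concrete, here is a minimal Python sketch of a generic GP-UCB loop in which $\beta$ is drawn from a chi-squared distribution at each iteration instead of being fixed to a conservatively large value. The toy objective, RBF kernel, degrees of freedom, and the omission of the robustness measure over input uncertainty are all illustrative assumptions, not the paper's actual construction.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def objective(x):
    # Placeholder black-box function (assumption, for illustration only).
    return -np.sin(3 * x) - x**2 + 0.7 * x

# Initial design points and observations.
X = rng.uniform(-1.0, 2.0, size=(5, 1))
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
candidates = np.linspace(-1.0, 2.0, 200).reshape(-1, 1)

for t in range(20):
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)

    # Randomized trade-off parameter: sample beta from a chi-squared
    # distribution (df=2 is an arbitrary choice here; the paper defines
    # its own chi-squared-based law).
    beta = chi2.rvs(df=2, random_state=rng)

    # Standard UCB acquisition with the sampled beta.
    ucb = mu + np.sqrt(beta) * sigma
    x_next = candidates[np.argmax(ucb)].reshape(1, -1)

    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("Best observed value:", y.max())
```

In the paper's setting, the UCB would be built on the posterior of a robustness measure under input uncertainty (for example, an expectation or worst case over the uncertain input) rather than on the black-box function itself; the sketch above only illustrates the randomized choice of $\beta$.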
Similar Papers
Robust Bayesian Optimisation with Unbounded Corruptions
Machine Learning (Stat)
Protects smart systems from bad data.
Improved Regret Bounds for Gaussian Process Upper Confidence Bound in Bayesian Optimization
Machine Learning (CS)
Makes smart guessing programs learn faster.
Function-on-Function Bayesian Optimization
Machine Learning (Stat)
Finds best settings for complex computer programs.