Bayesian Optimization under Uncertainty for Training a Scale Parameter in Stochastic Models
By: Akash Yadav, Ruda Zhang
Potential Business Impact:
Finds best settings for tricky computer programs faster.
Hyperparameter tuning is a challenging problem, especially when the system itself involves uncertainty. Because function evaluations are noisy, optimization under uncertainty can be computationally expensive. In this paper, we present a novel Bayesian optimization framework tailored for hyperparameter tuning under uncertainty, with a focus on optimizing a scale- or precision-type parameter in stochastic models. The proposed method employs a statistical surrogate for the underlying random variable, enabling analytical evaluation of the expectation operator. Moreover, we derive a closed-form expression for the optimizer of the random acquisition function, which significantly reduces the computational cost per iteration. Compared with a conventional one-dimensional Monte Carlo-based optimization scheme, the proposed approach requires 40 times fewer data points, resulting in up to a 40-fold reduction in computational cost. We demonstrate the effectiveness of the proposed method through two numerical examples in computational engineering.
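To make the setting concrete, the sketch below shows the conventional baseline the paper improves upon: a standard Gaussian-process surrogate with an expected-improvement acquisition, applied to a noisy one-dimensional scale parameter. This is not the authors' method; their framework replaces the numerical acquisition search with a closed-form optimizer and evaluates the expectation analytically. The objective function, search bounds, and all names here are illustrative assumptions.

```python
# Generic Bayesian-optimization loop for a single scale-type hyperparameter
# under noisy evaluations. Illustrative baseline only, not the paper's method.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def noisy_objective(theta):
    """Hypothetical noisy loss of a stochastic model at scale parameter theta."""
    return (np.log(theta) - 1.0) ** 2 + 0.1 * rng.standard_normal()

bounds = (0.1, 20.0)  # assumed search interval for the scale parameter

# Initial design: a few noisy evaluations spread across the interval.
X = rng.uniform(*bounds, size=(5, 1))
y = np.array([noisy_objective(t) for t in X.ravel()])

# WhiteKernel lets the GP account for the evaluation noise explicitly.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    # Expected improvement (for minimization) evaluated on a dense 1-D grid;
    # the paper's closed-form acquisition optimizer removes this numerical step.
    grid = np.linspace(*bounds, 500).reshape(-1, 1)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    # Evaluate the noisy objective at the acquisition maximizer.
    theta_next = grid[np.argmax(ei)]
    X = np.vstack([X, theta_next])
    y = np.append(y, noisy_objective(theta_next[0]))

print(f"estimated optimum: theta ≈ {X[np.argmin(y)][0]:.3f}")
```

In this baseline, every acquisition update requires a grid (or numerical) search and repeated noisy evaluations, which is where the Monte Carlo cost accumulates; the paper's analytical expectation and closed-form acquisition optimizer are what yield the reported reduction in data points per iteration.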
Similar Papers
Bayesian Optimization Parameter Tuning Framework for a Lyapunov Based Path Following Controller
Robotics
Robot learns to drive better with fewer tries.
Bayesian Optimization for Intrinsically Noisy Response Surfaces
Methodology
Improves experiments, saving time and money.
Uncertainty-Aware Strategies: A Model-Agnostic Framework for Robust Financial Optimization through Subsampling
Computational Finance
Helps make financial decisions safer with uncertain numbers.