Gradient-based Sample Selection for Faster Bayesian Optimization
By: Qiyu Wei, Haowei Wang, Zirui Cao, and more
Potential Business Impact:
Makes expensive computer searches faster by fitting the model on a small, well-chosen subset of the data.
Bayesian optimization (BO) is an effective technique for black-box optimization. However, its applicability is typically limited to moderate-budget problems because of the cubic cost of fitting the Gaussian process (GP) surrogate model. In large-budget scenarios, directly employing the standard GP model faces significant challenges in computation time and resource requirements. In this paper, we propose a novel approach, Gradient-based Sample Selection Bayesian Optimization (GSSBO), to enhance the computational efficiency of BO. The GP model is constructed on a selected subset of samples rather than the whole dataset; these samples are chosen by leveraging gradient information to maintain diversity and representativeness. We provide a theoretical analysis of the gradient-based sample selection strategy and obtain explicit sublinear regret bounds for the proposed framework. Extensive experiments on synthetic and real-world tasks demonstrate that our approach significantly reduces the computational cost of GP fitting in BO while maintaining optimization performance comparable to baseline methods.
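To make the idea concrete, here is a minimal sketch of fitting a GP on a gradient-selected subset instead of the full dataset. It is not the authors' algorithm: the abstract does not specify the exact selection rule, so this sketch scores each sample by a finite-difference estimate of the GP posterior-mean gradient and adds a greedy distance term to keep the subset diverse. The helper names `grad_norms` and `select_subset`, and all parameter choices, are illustrative assumptions.

```python
# Sketch: fit a GP on a gradient-selected subset (assumed selection rule,
# not the GSSBO rule from the paper).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def grad_norms(gp, X, eps=1e-4):
    """Central finite-difference estimate of ||grad mu(x)|| at each row of X."""
    norms = np.zeros(len(X))
    for i, x in enumerate(X):
        g = np.zeros_like(x)
        for d in range(x.size):
            xp, xm = x.copy(), x.copy()
            xp[d] += eps
            xm[d] -= eps
            g[d] = (gp.predict(xp[None]).item() - gp.predict(xm[None]).item()) / (2 * eps)
        norms[i] = np.linalg.norm(g)
    return norms

def select_subset(X, y, m):
    """Greedily pick m points scoring high on gradient magnitude and spread."""
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
    scores = grad_norms(gp, X)
    chosen = [int(np.argmax(scores))]        # seed with the top-scored point
    while len(chosen) < m:
        # distance from every point to its nearest already-chosen point
        dist = np.min(np.linalg.norm(X[:, None] - X[chosen][None], axis=-1), axis=1)
        combined = scores * dist             # trade off gradient score vs. diversity
        combined[chosen] = -np.inf           # never re-pick a chosen point
        chosen.append(int(np.argmax(combined)))
    return np.array(chosen)

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(200, 2))    # 200 observed points in 2-D
y = np.sin(X[:, 0]) * np.cos(X[:, 1])
idx = select_subset(X, y, m=30)
gp_small = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X[idx], y[idx])
print("fitting GP on", len(idx), "of", len(X), "samples")
```

Because GP fitting scales cubically in the number of training points, refitting on 30 rather than 200 samples cuts that cost by roughly two orders of magnitude; the open question the paper addresses is how to select the subset so that optimization performance, and the regret bound, is preserved.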
Similar Papers
Towards Scalable Bayesian Optimization via Gradient-Informed Bayesian Neural Networks
Machine Learning (CS)
Makes computer learning faster by using more math.
Co-Learning Bayesian Optimization
Machine Learning (CS)
Finds best answers faster by using many smart guesses.
Scalable Neural Network-based Blackbox Optimization
Machine Learning (CS)
Finds best answers faster, even with many choices.