NeST-BO: Fast Local Bayesian Optimization via Newton-Step Targeting of Gradient and Hessian Information
By: Wei-Ting Tang, Akshay Kudva, Joel A. Paulson
Potential Business Impact:
Finds best settings faster in complex problems.
Bayesian optimization (BO) is effective for expensive black-box problems but remains challenging in high dimensions. We propose NeST-BO, a local BO method that targets the Newton step by jointly learning gradient and Hessian information with Gaussian process surrogates, and selecting evaluations via a one-step lookahead bound on Newton-step error. We show that this bound (and hence the step error) contracts with batch size, so NeST-BO directly inherits inexact-Newton convergence: global progress under mild stability assumptions and quadratic local rates once steps are sufficiently accurate. To scale, we optimize the acquisition in low-dimensional subspaces (e.g., random embeddings or learned sparse subspaces), reducing the dominant cost of learning curvature from $O(d^2)$ to $O(m^2)$ with $m \ll d$ while preserving step targeting. Across high-dimensional synthetic and real-world problems, including cases with thousands of variables and unknown active subspaces, NeST-BO consistently yields faster convergence and lower regret than state-of-the-art local and high-dimensional BO baselines.
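To make the Newton-step-targeting idea concrete, here is a minimal sketch, not the paper's implementation: it fits a Gaussian process to local observations in a random low-dimensional embedding, differentiates the GP posterior mean in closed form to obtain an approximate gradient and Hessian at the incumbent, and takes a damped inexact-Newton step. The RBF kernel, fixed lengthscale, damping constant, toy objective, and embedding below are illustrative assumptions; NeST-BO's actual acquisition (the one-step lookahead bound on Newton-step error) and learned sparse subspaces are not reproduced here.

```python
# Minimal sketch (illustrative, not the paper's code) of Newton-step targeting:
# differentiate a GP posterior mean to estimate gradient/Hessian, then take a
# damped Newton step in a random low-dimensional embedding x = A z.
import numpy as np

def rbf(X1, X2, ell=0.5):
    """Squared-exponential kernel matrix between two sets of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gp_mean_grad_hess(x, X, y, ell=0.5, noise=1e-6):
    """Posterior-mean value, gradient, and Hessian of a zero-mean GP at x."""
    K = rbf(X, X, ell) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)                      # (K + noise*I)^{-1} y
    diff = x[None, :] - X                              # (n, m)
    k = np.exp(-0.5 * (diff**2).sum(-1) / ell**2)      # k(x, X)
    mean = k @ alpha
    grad = -(diff / ell**2 * k[:, None]).T @ alpha     # sum_i alpha_i dk_i/dx
    H = np.zeros((x.size, x.size))
    for ki, di, ai in zip(k, diff, alpha):             # sum_i alpha_i d^2k_i/dx dx^T
        H += ai * ki * (np.outer(di, di) / ell**4 - np.eye(x.size) / ell**2)
    return mean, grad, H

def newton_step(x, X, y, damping=1e-3):
    """Damped Newton step -(H + damping*I)^{-1} g from the GP posterior mean.
    A practical implementation would also enforce positive definiteness of H."""
    _, g, H = gp_mean_grad_hess(x, X, y)
    return -np.linalg.solve(H + damping * np.eye(x.size), g)

# Toy usage with a random embedding (m << d) and a stand-in black-box objective.
rng = np.random.default_rng(0)
d, m, n = 50, 4, 30
A = rng.standard_normal((d, m)) / np.sqrt(m)          # random embedding matrix
f = lambda x_full: np.sum((x_full - 0.3) ** 2)        # hypothetical expensive objective
Z = rng.uniform(-1, 1, size=(n, m))                   # local designs in the subspace
y_obs = np.array([f(A @ z) for z in Z])
z_inc = Z[np.argmin(y_obs)]                           # incumbent point
print("proposed subspace step:", newton_step(z_inc, Z, y_obs, damping=1e-2))
```

Differentiating the posterior mean in closed form is what keeps joint gradient and curvature estimation cheap: only the $m \times m$ Hessian of the subspace surrogate is needed, which mirrors the abstract's $O(d^2) \to O(m^2)$ scaling argument.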
Similar Papers
Gradient-based Sample Selection for Faster Bayesian Optimization
Machine Learning (Stat)
Makes computer searches faster by picking smart data.
Enhancing Trust-Region Bayesian Optimization via Newton Methods
Machine Learning (CS)
Finds best settings faster in complex problems.
Towards Scalable Bayesian Optimization via Gradient-Informed Bayesian Neural Networks
Machine Learning (CS)
Makes computer learning faster by using more math.