Towards Scalable Bayesian Optimization via Gradient-Informed Bayesian Neural Networks

Published: April 14, 2025 | arXiv ID: 2504.10076v2

By: Georgios Makrygiorgos, Joshua Hang Sai Ip, Ali Mesbah

Potential Business Impact:

Makes Bayesian optimization converge faster by feeding gradient information into the neural-network surrogate, reducing the number of expensive evaluations needed.

Business Areas:
A/B Testing, Data and Analytics

Bayesian optimization (BO) is a widely used method for data-driven optimization that generally relies on zeroth-order data of the objective function to construct probabilistic surrogate models. These surrogates guide the exploration-exploitation process toward finding the global optimum. While Gaussian processes (GPs) are commonly employed as surrogates of the unknown objective function, recent studies have highlighted the potential of Bayesian neural networks (BNNs) as scalable and flexible alternatives. Moreover, incorporating gradient observations into GPs, when available, has been shown to improve BO performance. However, the use of gradients within BNN surrogates remains unexplored. By leveraging automatic differentiation, gradient information can be seamlessly integrated into BNN training, resulting in more informative surrogates for BO. We propose a gradient-informed loss function for BNN training, effectively augmenting function observations with local gradient information. The effectiveness of this approach is demonstrated on well-known benchmarks in terms of improved BNN predictions and faster BO convergence as the number of decision variables increases.
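To make the idea of a gradient-informed loss concrete, here is a minimal sketch (not the authors' code): it combines the usual function-value residuals with residuals on input gradients, where the surrogate's gradients are obtained via automatic differentiation. The names `surrogate`, `gradient_informed_loss`, `grad_weight`, and the toy quadratic data are illustrative assumptions, and a plain MLP stands in for the paper's Bayesian neural network to keep the example short.

```python
import jax
import jax.numpy as jnp

def init_params(key, sizes=(2, 32, 32, 1)):
    # Random weights and zero biases for a small MLP.
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def surrogate(params, x):
    # Simple MLP standing in for the BNN's predictive mean.
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return (x @ W + b).squeeze(-1)

def gradient_informed_loss(params, X, y, dy, grad_weight=1.0):
    # Zeroth-order term: residuals on observed function values.
    preds = surrogate(params, X)
    value_loss = jnp.mean((preds - y) ** 2)
    # First-order term: residuals on observed gradients, using the
    # surrogate's input gradients from automatic differentiation.
    grad_fn = jax.vmap(jax.grad(lambda x: surrogate(params, x)))
    pred_grads = grad_fn(X)
    grad_loss = jnp.mean(jnp.sum((pred_grads - dy) ** 2, axis=-1))
    return value_loss + grad_weight * grad_loss

# Toy usage on a quadratic objective with known gradients.
key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (16, 2))
y = jnp.sum(X ** 2, axis=-1)   # f(x) = ||x||^2
dy = 2.0 * X                   # grad f(x) = 2x
params = init_params(key)
loss, grads = jax.value_and_grad(gradient_informed_loss)(params, X, y, dy)
```

In a full BO loop, this loss would be applied to the BNN's (approximate) posterior training objective rather than a deterministic MLP, and the trained surrogate would then drive an acquisition function over the decision variables.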

Page Count
6 pages

Category
Computer Science:
Machine Learning (CS)