Provable Accelerated Bayesian Optimization with Knowledge Transfer

Published: November 5, 2025 | arXiv ID: 2511.03125v1

By: Haitao Lin, Boxin Zhao, Mladen Kolar, and more

Potential Business Impact:

Speeds up tuning on a new task (for example, hyperparameter search) by reusing evaluations from related prior tasks.

Business Areas:
A/B Testing, Data and Analytics

We study how Bayesian optimization (BO) can be accelerated on a target task with historical knowledge transferred from related source tasks. Existing works on BO with knowledge transfer either do not have theoretical guarantees or achieve the same regret as BO in the non-transfer setting, $\tilde{\mathcal{O}}(\sqrt{T \gamma_f})$, where $T$ is the number of evaluations of the target function and $\gamma_f$ denotes its information gain. In this paper, we propose the DeltaBO algorithm, in which a novel uncertainty-quantification approach is built on the difference function $\delta$ between the source and target functions, which are allowed to belong to different reproducing kernel Hilbert spaces (RKHSs). Under mild assumptions, we prove that the regret of DeltaBO is of order $\tilde{\mathcal{O}}(\sqrt{T (T/N + \gamma_\delta)})$, where $N$ denotes the number of evaluations from source tasks and typically $N \gg T$. In many applications, source and target tasks are similar, which implies that $\gamma_\delta$ can be much smaller than $\gamma_f$. Empirical studies on both real-world hyperparameter tuning tasks and synthetic functions show that DeltaBO outperforms other baseline methods and support our theoretical claims.
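The core idea of modeling the difference function δ between source and target can be illustrated with a small sketch. The code below is a minimal, hedged illustration only (it is not the paper's exact uncertainty-quantification construction or theoretical setup): it fits a GP to abundant source-task data, fits a second GP to the residual between scarce target observations and the source posterior mean, and combines the two for a UCB-style acquisition. The toy objectives, kernel choices, and the `beta` exploration weight are all illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
# Hypothetical source and target objectives; the target differs from the source
# by a small, smooth shift, so the difference function is "easier" than the target.
source_f = lambda x: np.sin(3 * x).ravel()
target_f = lambda x: np.sin(3 * x).ravel() + 0.3 * x.ravel()

# 1) Fit a GP to abundant historical source evaluations (N >> T).
X_src = rng.uniform(0, 2, size=(200, 1))
gp_src = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)
gp_src.fit(X_src, source_f(X_src))

# 2) Optimize the target with few evaluations, modeling only the difference
#    delta(x) = f_target(x) - mu_source(x) with a second GP.
X_tgt = rng.uniform(0, 2, size=(2, 1))
y_tgt = target_f(X_tgt)
grid = np.linspace(0, 2, 400).reshape(-1, 1)
T, beta = 15, 2.0  # target-task budget and UCB exploration weight (illustrative)

for t in range(T):
    mu_src_obs, _ = gp_src.predict(X_tgt, return_std=True)
    gp_delta = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)
    gp_delta.fit(X_tgt, y_tgt - mu_src_obs)  # GP on the residual (difference) function

    mu_src, sd_src = gp_src.predict(grid, return_std=True)
    mu_d, sd_d = gp_delta.predict(grid, return_std=True)
    # Combined mean and (naively stacked) uncertainty drive the acquisition.
    ucb = (mu_src + mu_d) + beta * np.sqrt(sd_src**2 + sd_d**2)

    x_next = grid[np.argmax(ucb)].reshape(1, -1)
    X_tgt = np.vstack([X_tgt, x_next])
    y_tgt = np.append(y_tgt, target_f(x_next))

print("best target value found:", y_tgt.max())
```

Intuitively, because the source data pin down the source surrogate cheaply, the target-task budget is spent only on the (presumably simpler) difference, which mirrors why the regret bound depends on γ_δ rather than γ_f.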

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
24 pages

Category
Statistics: Machine Learning (Stat)