Dynamic Priors in Bayesian Optimization for Hyperparameter Optimization

Published: November 4, 2025 | arXiv ID: 2511.02570v1

By: Lukas Fehring, Marcel Wever, Maximilian Spliethöver, and more

Potential Business Impact:

Lets practitioners steer automated machine learning model tuning with their domain knowledge while the optimization runs.

Business Areas:
A/B Testing, Data and Analytics

Hyperparameter optimization (HPO), for example based on Bayesian optimization (BO), supports users in designing models well suited to a given dataset. HPO has proven effective in applications ranging from classical machine learning on tabular data to deep neural networks for computer vision and transformers for natural language processing. However, HPO still sometimes lacks acceptance among machine learning experts due to its black-box nature and limited user control. To address this, initial approaches have been proposed that initialize BO methods with expert knowledge, but they do not allow for online steering during the optimization process. In this paper, we introduce a novel method that enables repeated interventions to steer BO via user input, specifying expert knowledge and user preferences at runtime of the HPO process in the form of prior distributions. To this end, we generalize an existing method, $\pi$BO, while preserving its theoretical guarantees. We also introduce a misleading-prior detection scheme that protects against harmful user inputs. In our experimental evaluation, we demonstrate that our method effectively incorporates multiple priors, leveraging informative priors while reliably rejecting or overcoming misleading ones, and thereby remains competitive with unperturbed BO.
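
The mechanism the paper generalizes, $\pi$BO, weights a standard acquisition function by the user's prior density raised to an exponent that decays over iterations: informative priors focus the early search, while misleading ones lose influence as observations accumulate. The sketch below illustrates this idea in Python; the expected-improvement acquisition, the $\beta/t$ decay schedule, and all function names are assumptions for illustration, not the paper's implementation (the paper extends this to priors supplied repeatedly at runtime and adds a separate misleading-prior detection scheme).

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    """Standard expected improvement for minimization, given the surrogate's
    posterior mean `mu` and standard deviation `sigma` at each candidate."""
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive variance
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def prior_weighted_acquisition(mu, sigma, best, prior_pdf, x, t, beta=10.0):
    """pi-BO-style acquisition: EI multiplied by the user prior raised to an
    exponent beta/t that decays with iteration t, so an informative prior
    accelerates the search early on while a misleading prior is eventually
    overridden by the surrogate's evidence. The beta/t schedule here is an
    illustrative assumption."""
    return expected_improvement(mu, sigma, best) * prior_pdf(x) ** (beta / max(t, 1))

# Illustration: a Gaussian user belief that good hyperparameter values lie near 0.3.
prior = lambda x: norm.pdf(x, loc=0.3, scale=0.1)
candidates = np.linspace(0.0, 1.0, 101)
mu, sigma = np.zeros_like(candidates), np.ones_like(candidates)  # placeholder posterior
scores = prior_weighted_acquisition(mu, sigma, best=0.0, prior_pdf=prior,
                                    x=candidates, t=5)
next_x = candidates[np.argmax(scores)]  # point proposed for the next evaluation
```

With a flat surrogate posterior, as in this toy setup, the proposal lands at the prior's mode (x = 0.3); as t grows, the exponent beta/t shrinks toward zero and the prior's weight fades, which is the property that lets misleading priors be overcome.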

Country of Origin
🇩🇪 Germany

Page Count
27 pages

Category
Computer Science:
Machine Learning (CS)