Dynamic Priors in Bayesian Optimization for Hyperparameter Optimization
By: Lukas Fehring, Marcel Wever, Maximilian Spliethöver, and more
Potential Business Impact:
Lets users steer automated machine-learning tuning with their own knowledge while it runs.
Hyperparameter optimization (HPO), for example based on Bayesian optimization (BO), supports users in designing models well-suited to a given dataset. HPO has proven its effectiveness in several applications, ranging from classical machine learning for tabular data to deep neural networks for computer vision and transformers for natural language processing. However, HPO still sometimes lacks acceptance by machine learning experts due to its black-box nature and limited user control. Addressing this, initial approaches have been proposed to initialize BO methods with expert knowledge. However, these approaches do not allow for online steering during the optimization process. In this paper, we introduce a novel method that enables repeated interventions to steer BO via user input, specifying expert knowledge and user preferences at runtime of the HPO process in the form of prior distributions. To this end, we generalize an existing method, $\pi$BO, preserving its theoretical guarantees. We also introduce a misleading-prior detection scheme, which protects against harmful user inputs. In our experimental evaluation, we demonstrate that our method can effectively incorporate multiple priors, leveraging informative priors, whereas misleading priors are reliably rejected or overcome. Thereby, we remain competitive with unperturbed BO.
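To make the prior-steering idea concrete, below is a minimal sketch of a prior-weighted acquisition in the spirit of $\pi$BO, where a user-supplied prior over promising hyperparameters rescales a standard Expected Improvement score and its influence decays with the iteration count, so a misleading prior can eventually be overcome. The function and parameter names (`expected_improvement`, `user_prior`, `beta`, `on_user_intervention`) are illustrative assumptions, not the paper's actual API; the dynamic-prior update is only hinted at by swapping the prior at runtime.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_f):
    """Standard EI for minimization, given the GP posterior mean and std at a point."""
    sigma = np.maximum(sigma, 1e-12)  # avoid division by zero
    z = (best_f - mu) / sigma
    return (best_f - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def prior_weighted_acquisition(x, mu, sigma, best_f, user_prior, n, beta=10.0):
    """EI rescaled by a user prior pi(x); the exponent beta/n shrinks the
    prior's influence as the iteration count n grows (piBO-style weighting,
    sketched here as an assumption)."""
    ei = expected_improvement(mu, sigma, best_f)
    return ei * user_prior(x) ** (beta / max(n, 1))

def on_user_intervention(new_prior, state):
    """Dynamic steering, sketched: when the user intervenes at runtime,
    subsequent acquisition evaluations simply use the newly supplied prior."""
    state["user_prior"] = new_prior
```

A usage pattern would be to call `prior_weighted_acquisition` inside the BO loop with the current `state["user_prior"]`, replacing that prior via `on_user_intervention` whenever new expert knowledge arrives; a misleading-prior check could, for instance, compare observed performance under the prior against prior-free expectations before keeping it.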
Similar Papers
Informed Initialization for Bayesian Optimization and Active Learning
Machine Learning (CS)
Finds best settings faster for computers.
Clustering-based Meta Bayesian Optimization with Theoretical Guarantee
Machine Learning (CS)
Finds best settings faster, even with many past tries.
Iterated Population Based Training with Task-Agnostic Restarts
Machine Learning (CS)
Finds best computer learning settings automatically.