Bayesian Optimization from Human Feedback: Near-Optimal Regret Bounds
By: Aya Kayal, Sattar Vakili, Laura Toni, and more
Potential Business Impact:
Finds best choices with fewer guesses.
Bayesian optimization (BO) with preference-based feedback has recently garnered significant attention due to its emerging applications. We refer to this problem as Bayesian Optimization from Human Feedback (BOHF), which differs from conventional BO by learning the best actions from a reduced feedback model, where only the preference between two actions is revealed to the learner at each time step. The objective is to identify the best action using a limited number of preference queries, typically obtained through costly human feedback. Existing work, which adopts the Bradley-Terry-Luce (BTL) feedback model, provides regret bounds for the performance of several algorithms. In this work, within the same framework, we develop tighter performance guarantees. Specifically, we derive regret bounds of $\tilde{\mathcal{O}}(\sqrt{\Gamma(T)T})$, where $\Gamma(T)$ represents the maximum information gain (a kernel-specific complexity term) and $T$ is the number of queries. Our results significantly improve upon existing bounds. Notably, for common kernels, we show that the order-optimal sample complexities of conventional BO, achieved with richer feedback models, are recovered. In other words, the same number of preferential samples as scalar-valued samples is sufficient to find a nearly optimal solution.
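To make the reduced feedback model concrete, here is a minimal sketch of the BTL preference query the abstract describes: the learner proposes two actions and observes only a single binary comparison. The latent utility function and logistic link below are illustrative assumptions for demonstration; the paper models the utility through a kernel-based (RKHS) assumption rather than any fixed closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def latent_utility(x):
    # Hypothetical smooth utility over [0, 1]; stands in for the unknown
    # function the learner is trying to optimize.
    return np.sin(3 * x) + 0.5 * x

def btl_preference(x, x_prime):
    """Return 1 if x is preferred over x_prime, else 0, under the BTL model:
    P(x preferred over x_prime) = sigmoid(f(x) - f(x_prime))."""
    p = 1.0 / (1.0 + np.exp(-(latent_utility(x) - latent_utility(x_prime))))
    return int(rng.random() < p)

# One preference query: the only feedback revealed to the learner at each
# time step t = 1, ..., T (no scalar reward is observed).
x, x_prime = 0.2, 0.8
print(btl_preference(x, x_prime))
```

The point of the sketch is that each query yields one bit of information rather than a scalar evaluation, which is why matching the sample complexity of conventional BO, as the paper's $\tilde{\mathcal{O}}(\sqrt{\Gamma(T)T})$ regret bound implies for common kernels, is a nontrivial result.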
Similar Papers
Direct Regret Optimization in Bayesian Optimization
Machine Learning (CS)
Finds best answers faster by learning from many tries.
Improved Regret Bounds for Gaussian Process Upper Confidence Bound in Bayesian Optimization
Machine Learning (CS)
Makes smart guessing programs learn faster.
Provable Accelerated Bayesian Optimization with Knowledge Transfer
Machine Learning (Stat)
Teaches computers to learn new tasks faster.