Score: 1

Bayesian Optimization from Human Feedback: Near-Optimal Regret Bounds

Published: May 29, 2025 | arXiv ID: 2505.23673v1

By: Aya Kayal, Sattar Vakili, Laura Toni, and more

Potential Business Impact:

Finds best choices with fewer guesses.

Business Areas:
A/B Testing, Data and Analytics

Bayesian optimization (BO) with preference-based feedback has recently garnered significant attention due to its emerging applications. We refer to this problem as Bayesian Optimization from Human Feedback (BOHF), which differs from conventional BO by learning the best actions from a reduced feedback model, where only the preference between two actions is revealed to the learner at each time step. The objective is to identify the best action using a limited number of preference queries, typically obtained through costly human feedback. Existing work, which adopts the Bradley-Terry-Luce (BTL) feedback model, provides regret bounds for the performance of several algorithms. In this work, within the same framework we develop tighter performance guarantees. Specifically, we derive regret bounds of $\tilde{\mathcal{O}}(\sqrt{\Gamma(T)T})$, where $\Gamma(T)$ represents the maximum information gain (a kernel-specific complexity term) and $T$ is the number of queries. Our results significantly improve upon existing bounds. Notably, for common kernels, we show that the order-optimal sample complexities of conventional BO (achieved with richer feedback models) are recovered. In other words, the same number of preferential samples as scalar-valued samples is sufficient to find a nearly optimal solution.
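To make the reduced feedback model concrete, below is a minimal Python sketch of the Bradley-Terry-Luce (BTL) preference feedback described in the abstract: the learner never observes scalar values of the objective, only which of two queried actions is preferred, with preference probability given by a logistic function of the utility difference. The latent utility, candidate grid, and random duel selection here are illustrative assumptions for the sketch, not the paper's algorithm or its confidence-bound query strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

def latent_utility(x):
    # Hypothetical unknown objective; the learner never sees its values directly.
    return np.sin(3 * x) + 0.5 * x

def btl_preference(x, x_prime):
    # BTL feedback: P(x preferred over x_prime) = sigmoid(f(x) - f(x_prime)).
    p = 1.0 / (1.0 + np.exp(-(latent_utility(x) - latent_utility(x_prime))))
    return rng.random() < p

# Toy discretization of the action space.
candidates = np.linspace(0.0, 1.0, 50)

# Count duel wins per candidate; a crude stand-in for the kernelized
# selection rule whose regret the paper analyzes.
wins = np.zeros_like(candidates)
plays = np.zeros_like(candidates)

T = 500  # number of preference queries
for _ in range(T):
    i, j = rng.choice(len(candidates), size=2, replace=False)
    if btl_preference(candidates[i], candidates[j]):
        wins[i] += 1
    else:
        wins[j] += 1
    plays[i] += 1
    plays[j] += 1

# Empirical win rate as a rough score for each action.
scores = wins / np.maximum(plays, 1)
print(f"Estimated best action: {candidates[np.argmax(scores)]:.3f}")
print(f"True best action:      {candidates[np.argmax(latent_utility(candidates))]:.3f}")
```

Each iteration consumes exactly one preference query, which is the sample-complexity currency in the paper's $\tilde{\mathcal{O}}(\sqrt{\Gamma(T)T})$ regret bound.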

Country of Origin
🇬🇧 United Kingdom

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
Machine Learning (CS)