Functional multi-armed bandit and the best function identification problems
By: Yuriy Dorn, Aleksandr Katrutsa, Ilgam Latypov, and more
Potential Business Impact:
Teaches computers to learn faster from mistakes.
Bandit optimization usually refers to the class of online optimization problems with limited feedback: a decision maker observes only the objective value at the current point when making a new decision and has no access to the gradient of the objective function. While this name accurately captures the limitation in feedback, it is somewhat misleading, since the setting has no direct connection to the multi-armed bandit (MAB) problem class. We propose two new classes of problems: the functional multi-armed bandit problem (FMAB) and the best function identification problem. They are modifications of the multi-armed bandit problem and the best arm identification problem, respectively, in which each arm represents an unknown black-box function. These problem classes are a surprisingly good fit for modeling real-world problems such as competitive LLM training. To solve problems from these classes, we propose a new reduction scheme for constructing UCB-type algorithms, namely the F-LCB algorithm, built on top of algorithms for nonlinear optimization with known convergence rates. We provide regret upper bounds for this reduction scheme in terms of the base algorithms' convergence rates, and we present numerical experiments that demonstrate the performance of the proposed scheme.
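For intuition, here is a minimal, hypothetical sketch of the reduction idea the abstract describes, not the paper's exact F-LCB algorithm: each arm is a black-box function advanced by a base optimizer whose convergence rate is known, and the next arm to pull is the one with the smallest lower confidence bound on its optimal value. The function names, the rate bound, and the finite-difference base step below are illustrative assumptions.

```python
import numpy as np

def lcb_reduction(arms, base_step, rate_bound, budget):
    """Sketch of an LCB-style reduction (illustrative, not the paper's F-LCB).

    arms: list of (f, x0) pairs, each f an unknown black-box function
    base_step(f, x): one step of the base optimizer, returns the next iterate
    rate_bound(t): assumed bound on f(x_t) - min f after t base steps
    budget: total number of arm pulls
    """
    xs = [x0 for _, x0 in arms]
    vals = [f(x0) for f, x0 in arms]
    steps = [1] * len(arms)
    pulls = []
    for _ in range(budget):
        # LCB: the observed value can exceed the arm's optimum by at most
        # rate_bound(steps), so subtract that slack.
        lcb = [v - rate_bound(t) for v, t in zip(vals, steps)]
        i = int(np.argmin(lcb))          # most promising arm (minimization)
        f, _ = arms[i]
        xs[i] = base_step(f, xs[i])      # advance the base optimizer one step
        vals[i] = f(xs[i])
        steps[i] += 1
        pulls.append(i)
    return pulls, vals

# Toy usage: two quadratic arms optimized with a zeroth-order
# (finite-difference) gradient step, keeping the feedback value-only.
def fd_grad_step(f, x, lr=0.1, eps=1e-5):
    g = (f(x + eps) - f(x - eps)) / (2 * eps)
    return x - lr * g

arms = [(lambda x: (x - 1.0) ** 2, 5.0),
        (lambda x: (x + 2.0) ** 2 + 0.5, 5.0)]
pulls, vals = lcb_reduction(arms, fd_grad_step,
                            rate_bound=lambda t: 10.0 / t, budget=50)
print(pulls[-5:], vals)  # later pulls concentrate on the arm with the smaller optimum
```

The design choice mirrored here is the abstract's reduction: the confidence width comes from the base optimizer's convergence-rate bound rather than from stochastic reward noise, so any optimizer with a known rate can be plugged in as the per-arm subroutine.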
Similar Papers
Stochastic Multi-Objective Multi-Armed Bandits: Regret Definition and Algorithm
Machine Learning (CS)
Helps computers choose best options with many goals.
Semi-Parametric Batched Global Multi-Armed Bandits with Covariates
Machine Learning (Stat)
Helps computers learn better from grouped information.
A Frequency-Domain Analysis of the Multi-Armed Bandit Problem: A New Perspective on the Exploration-Exploitation Trade-off
Machine Learning (CS)
Helps computers learn faster by seeing patterns.