Optimal Arm Elimination Algorithms for Combinatorial Bandits
By: Yuxiao Wen, Yanjun Han, Zhengyuan Zhou
Potential Business Impact:
Helps systems choose the best combinations of options faster.
Combinatorial bandits extend the classical bandit framework to settings where the learner selects multiple arms in each round, motivated by applications such as online recommendation and assortment optimization. While extensions of upper confidence bound (UCB) algorithms arise naturally in this context, adapting arm elimination methods has proved more challenging. We introduce a novel elimination scheme that partitions arms into three categories (confirmed, active, and eliminated), and incorporates explicit exploration to update these sets. We demonstrate the efficacy of our algorithm in two settings: the combinatorial multi-armed bandit with general graph feedback, and the combinatorial linear contextual bandit. In both cases, our approach achieves near-optimal regret, whereas UCB-based methods can provably fail due to insufficient explicit exploration. Matching lower bounds are also provided.
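To make the three-set idea concrete, here is a minimal sketch of an elimination loop that maintains confirmed, active, and eliminated arm sets with explicit exploration. This is not the paper's algorithm: the function name, the Hoeffding-style confidence radius, and the top-k confirm/eliminate tests are all illustrative assumptions standing in for the paper's actual update rule.

```python
import numpy as np

def three_set_elimination(means, horizon, k=2, seed=0):
    """Toy top-k bandit loop with confirmed/active/eliminated sets.

    Hypothetical sketch only: the set-update rule below is a generic
    Hoeffding-style top-k test, not the paper's actual scheme.
    """
    rng = np.random.default_rng(seed)
    n = len(means)
    counts = np.zeros(n)
    sums = np.zeros(n)
    confirmed, active, eliminated = set(), set(range(n)), set()

    for _ in range(horizon):
        # Explicit exploration: pull the least-sampled active arms,
        # padding the size-k action with the best confirmed arms.
        explore = sorted(active, key=lambda a: counts[a])[:k]
        pad = sorted(confirmed, key=lambda a: -sums[a] / max(counts[a], 1))
        action = (explore + pad)[:k]
        for a in action:
            counts[a] += 1
            sums[a] += rng.normal(means[a], 1.0)  # simulated unit-variance reward

        pool = list(active | confirmed)
        if any(counts[a] == 0 for a in pool):
            continue  # wait until every surviving arm has a sample
        est = {a: sums[a] / counts[a] for a in pool}
        rad = {a: np.sqrt(2 * np.log(horizon) / counts[a]) for a in pool}
        for a in list(active):
            # Arms whose upper bound still overlaps a's lower bound might beat it;
            # arms whose lower bound clears a's upper bound provably beat it.
            maybe_better = sum(est[b] + rad[b] > est[a] - rad[a] for b in pool if b != a)
            surely_better = sum(est[b] - rad[b] > est[a] + rad[a] for b in pool if b != a)
            if maybe_better < k:        # provably among the top k: confirm
                active.discard(a); confirmed.add(a)
            elif surely_better >= k:    # provably outside the top k: eliminate
                active.discard(a); eliminated.add(a)
    return confirmed, active, eliminated

print(three_set_elimination([0.9, 0.8, 0.3, 0.2, 0.1], horizon=5000, k=2))
```

In this toy run the two best arms end up confirmed and the rest eliminated; the key structural point, which the sketch shares with the abstract's description, is that exploration is budgeted explicitly to the active set rather than emerging implicitly from optimism as in UCB.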
Similar Papers
Algorithm Design and Stronger Guarantees for the Improving Multi-Armed Bandits Problem
Machine Learning (CS)
Helps computers pick the best option faster.
Cascading Bandits With Feedback
Machine Learning (CS)
Helps smart devices choose the best AI model.
Oracle-Efficient Combinatorial Semi-Bandits
Machine Learning (Stat)
Makes smart choices faster with fewer guesses.