Optimal Arm Elimination Algorithms for Combinatorial Bandits

Published: October 28, 2025 | arXiv ID: 2510.23992v1

By: Yuxiao Wen, Yanjun Han, Zhengyuan Zhou

Potential Business Impact:

Helps systems pick the best combination of options (e.g., recommendations or product assortments) faster, with fewer wasted trials.

Business Areas:
A/B Testing; Data and Analytics

Combinatorial bandits extend the classical bandit framework to settings where the learner selects multiple arms in each round, motivated by applications such as online recommendation and assortment optimization. While extensions of upper confidence bound (UCB) algorithms arise naturally in this context, adapting arm elimination methods has proved more challenging. We introduce a novel elimination scheme that partitions arms into three categories (confirmed, active, and eliminated), and incorporates explicit exploration to update these sets. We demonstrate the efficacy of our algorithm in two settings: the combinatorial multi-armed bandit with general graph feedback, and the combinatorial linear contextual bandit. In both cases, our approach achieves near-optimal regret, whereas UCB-based methods can provably fail due to insufficient explicit exploration. Matching lower bounds are also provided.
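The three-set idea in the abstract can be illustrated with a minimal racing-style sketch in Python. Everything here is an assumption for illustration: the Hoeffding confidence radius, the accept/reject thresholds, and the top-k objective are standard elimination-bandit choices, not the paper's actual algorithm (which handles graph feedback and linear contextual structure with tuned exploration).

```python
import math
import random


def elimination_bandit(means, k, horizon=20000, seed=0):
    """Illustrative confirmed/active/eliminated scheme for picking the
    top-k of n Bernoulli arms. A hypothetical sketch, not the paper's
    combinatorial algorithm."""
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n
    sums = [0.0] * n
    confirmed, active, eliminated = set(), set(range(n)), set()
    t = 0
    while len(confirmed) < k and active and t < horizon:
        # Explicit exploration: sample every still-active arm once.
        for a in active:
            reward = 1.0 if rng.random() < means[a] else 0.0
            counts[a] += 1
            sums[a] += reward
            t += 1
        # Hoeffding-style confidence radius (an assumed choice).
        def rad(a):
            return math.sqrt(2.0 * math.log(max(t, 2)) / counts[a])
        ucb = {a: sums[a] / counts[a] + rad(a) for a in active}
        lcb = {a: sums[a] / counts[a] - rad(a) for a in active}
        need = k - len(confirmed)  # slots left among the top k
        # Decide on a snapshot, then apply, so one round's moves
        # don't affect each other's thresholds.
        accepts, rejects = [], []
        for a in active:
            beats = sum(1 for b in active if b != a and lcb[a] > ucb[b])
            dominated_by = sum(1 for b in active if b != a and lcb[b] > ucb[a])
            if beats >= len(active) - need:
                accepts.append(a)        # confirmed: surely top-k
            elif dominated_by >= need:
                rejects.append(a)        # eliminated: surely not top-k
        for a in accepts:
            confirmed.add(a)
            active.discard(a)
        for a in rejects:
            eliminated.add(a)
            active.discard(a)
    # If the horizon runs out, fill remaining slots empirically.
    rest = sorted(active, key=lambda a: sums[a] / counts[a], reverse=True)
    return sorted(confirmed | set(rest[: k - len(confirmed)]))
```

The key structural point the sketch mirrors is that every active arm keeps being explored explicitly each round, rather than only via an optimistic index as in UCB, which is the exploration the abstract says UCB-based methods can lack.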

Country of Origin
🇺🇸 United States

Page Count
35 pages

Category
Computer Science:
Machine Learning (CS)