Efficient Best-of-Both-Worlds Algorithms for Contextual Combinatorial Semi-Bandits
By: Mengmeng Li, Philipp Schneider, Jelisaveta Aleksić, and more
Potential Business Impact:
Lets large-scale, real-time decision systems learn quickly whether conditions are stable or adversarial.
We introduce the first best-of-both-worlds algorithm for contextual combinatorial semi-bandits that simultaneously guarantees $\widetilde{\mathcal{O}}(\sqrt{T})$ regret in the adversarial regime and $\widetilde{\mathcal{O}}(\ln T)$ regret in the corrupted stochastic regime. Our approach builds on the Follow-the-Regularized-Leader (FTRL) framework equipped with a Shannon entropy regularizer, yielding a flexible method that admits efficient implementations. Beyond regret bounds, we tackle the practical bottleneck in FTRL (or, equivalently, Online Stochastic Mirror Descent) arising from the high-dimensional projection step encountered in each round of interaction. By leveraging the Karush-Kuhn-Tucker conditions, we transform the $K$-dimensional convex projection problem into a single-variable root-finding problem, dramatically accelerating each round. Empirical evaluations demonstrate that this combined strategy not only attains the attractive regret bounds of best-of-both-worlds algorithms but also delivers substantial per-round speed-ups, making it well-suited for large-scale, real-time applications.
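To make the projection speed-up concrete, here is a minimal Python sketch of one FTRL update with a Shannon entropy regularizer over the capped simplex {x in [0,1]^K : sum(x) = m}, a standard action polytope for m-set semi-bandits. This is an illustration under stated assumptions, not the authors' implementation: the function name `ftrl_entropy_update` and the specific constraint set are assumptions. The point it demonstrates is the one from the abstract: by the KKT conditions, the K-dimensional convex projection collapses to root-finding in a single dual variable, solved here by bisection.

```python
import numpy as np

def ftrl_entropy_update(cum_losses, eta, m, iters=100):
    """One FTRL step with a (negative) Shannon entropy regularizer over the
    capped simplex {x in [0,1]^K : sum(x) = m}.  Illustrative sketch only.

    KKT conditions give the closed form x_i = min(1, exp(nu - eta * L_i))
    for a scalar dual variable nu, so the K-dimensional projection reduces
    to finding the root of g(nu) = sum_i x_i(nu) - m.
    """
    L = eta * np.asarray(cum_losses, dtype=float)
    L -= L.min()              # shift for numerical stability; minimizer unchanged
    K = L.size
    assert 1 <= m <= K

    def g(nu):                # nondecreasing in nu
        return np.minimum(1.0, np.exp(nu - L)).sum() - m

    lo = np.log(m / K)        # here each x_i <= m/K, so g(lo) <= 0
    hi = L.max()              # here every x_i = 1, so g(hi) = K - m >= 0
    for _ in range(iters):    # bisection on the single dual variable
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    nu = 0.5 * (lo + hi)
    return np.minimum(1.0, np.exp(nu - L))
```

For instance, `ftrl_entropy_update([0.1, 0.5, 0.2, 1.0, 0.3], eta=1.0, m=2)` returns a marginal vector summing to 2, from which an m-set action can then be drawn by any sampling scheme that matches those marginals.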
Similar Papers
Follow-the-Perturbed-Leader Approaches Best-of-Both-Worlds for the m-Set Semi-Bandit Problems
Machine Learning (CS)
Helps computers pick the best options faster.
Heavy-tailed Linear Bandits: Adversarial Robustness, Best-of-both-worlds, and Beyond
Machine Learning (CS)
Helps computers learn better with tricky, unpredictable data.
Follow-the-Perturbed-Leader for Decoupled Bandits: Best-of-Both-Worlds and Practicality
Machine Learning (Stat)
Learns faster by balancing trying new options against using known ones.