A Tight Lower Bound for Non-stochastic Multi-armed Bandits with Expert Advice

Published: October 31, 2025 | arXiv ID: 2511.00257v1

By: Zachary Chase, Shinji Ito, Idan Mehalel

Potential Business Impact:

Helps computers pick the best option faster when choosing among many alternatives with limited feedback.

Business Areas:
A/B Testing, Data and Analytics

We determine the minimax optimal expected regret in the classic non-stochastic multi-armed bandit with expert advice problem, by proving a lower bound that matches the upper bound of Kale (2014). The two bounds determine the minimax optimal expected regret to be $\Theta\left( \sqrt{T K \log (N/K) } \right)$, where $K$ is the number of arms, $N$ is the number of experts, and $T$ is the time horizon.
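To make the setting concrete, below is a minimal sketch of EXP4 (Exponential weights for Exploration and Exploitation with Experts), the classic algorithm for non-stochastic bandits with expert advice. Note that plain EXP4 attains regret $O(\sqrt{TK\log N})$; the tight $\Theta(\sqrt{TK\log(N/K)})$ rate discussed in the abstract matches Kale's (2014) refined upper bound, which this sketch does not implement. All names, parameter values, and the toy reward model are hypothetical illustrations, not taken from the paper.

```python
import math
import random

def exp4(T, K, advice_fn, reward_fn, eta):
    """Sketch of EXP4: exponential weights over N experts for a K-armed bandit.

    advice_fn(t) -> list of N probability distributions over the K arms
    reward_fn(t, arm) -> reward in [0, 1] for pulling `arm` at round t
    """
    N = len(advice_fn(0))
    log_w = [0.0] * N          # log-weights, for numerical stability
    total_reward = 0.0
    for t in range(T):
        advice = advice_fn(t)
        m = max(log_w)
        w = [math.exp(lw - m) for lw in log_w]
        W = sum(w)
        # Mix the experts' advice into a single arm distribution.
        p = [sum(w[e] * advice[e][k] for e in range(N)) / W for k in range(K)]
        arm = random.choices(range(K), weights=p)[0]
        r = reward_fn(t, arm)
        total_reward += r
        # Importance-weighted reward estimate for the chosen arm only.
        r_hat = r / p[arm]
        for e in range(N):
            log_w[e] += eta * advice[e][arm] * r_hat
    return log_w, total_reward

# Hypothetical toy instance: 3 Bernoulli arms, 3 one-hot experts
# (expert e always recommends arm e); arm 0 is clearly best.
random.seed(0)
T, K, N = 2000, 3, 3
means = [0.9, 0.2, 0.2]
one_hot = [[1.0 if k == e else 0.0 for k in range(K)] for e in range(N)]
advice = lambda t: one_hot
reward = lambda t, arm: 1.0 if random.random() < means[arm] else 0.0
eta = math.sqrt(math.log(N) / (T * K))   # a standard learning-rate choice
log_w, total = exp4(T, K, advice, reward, eta)
```

After 2000 rounds the expert recommending the best arm should carry the largest weight, illustrating how the algorithm concentrates on good advice despite observing only the reward of the arm it actually pulls.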

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)