A Tight Lower Bound for Non-stochastic Multi-armed Bandits with Expert Advice
By: Zachary Chase, Shinji Ito, Idan Mehalel
Potential Business Impact:
Pins down the fastest possible rate at which a computer can learn the best option from expert advice.
We determine the minimax optimal expected regret in the classic problem of non-stochastic multi-armed bandits with expert advice by proving a lower bound that matches the upper bound of Kale (2014). Together, the two bounds pin down the minimax optimal expected regret as $\Theta\left( \sqrt{T K \log (N/K)} \right)$, where $K$ is the number of arms, $N$ is the number of experts, and $T$ is the time horizon.
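For orientation, below is a minimal sketch of the classic EXP4 algorithm of Auer, Cesa-Bianchi, Freund, and Schapire, the textbook method for bandits with expert advice, which attains the weaker $O(\sqrt{T K \log N})$ guarantee. The $\log(N/K)$ refinement in the rate above is due to Kale (2014) and is not implemented here; the callbacks get_advice and get_loss, the tuning of eta, and the toy demo are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def exp4(T, K, N, get_advice, get_loss, eta=None, rng=None):
    """Minimal EXP4-style learner for bandits with expert advice.

    get_advice(t) -> (N, K) array; row i is expert i's probability
        distribution over the K arms at round t.
    get_loss(t, arm) -> loss in [0, 1] of the pulled arm at round t.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Classic tuning for an O(sqrt(T K log N)) expected-regret guarantee.
    eta = np.sqrt(2.0 * np.log(N) / (T * K)) if eta is None else eta
    log_w = np.zeros(N)                  # log-weights over the N experts
    total_loss = 0.0
    for t in range(T):
        advice = get_advice(t)           # shape (N, K)
        q = np.exp(log_w - log_w.max())
        q /= q.sum()                     # distribution over experts
        p = q @ advice                   # induced distribution over arms
        arm = rng.choice(K, p=p)
        loss = get_loss(t, arm)
        total_loss += loss
        # Importance-weighted estimate of the full K-dimensional loss vector:
        # only the pulled arm's coordinate is nonzero.
        est = np.zeros(K)
        est[arm] = loss / p[arm]
        # Each expert is charged the estimated loss of its own advice.
        expert_loss = advice @ est
        log_w -= eta * expert_loss
    return total_loss

# Toy demo with a fixed random advice tape and losses in [0, 1]
# (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
T, K, N = 10_000, 5, 50
advice_tape = rng.dirichlet(np.ones(K), size=(T, N))   # shape (T, N, K)
loss_tape = rng.random((T, K))
total = exp4(T, K, N,
             get_advice=lambda t: advice_tape[t],
             get_loss=lambda t, arm: loss_tape[t, arm],
             rng=rng)
```

The log-weight representation avoids numerical underflow over long horizons, and the loss-based update needs no explicit uniform exploration to control expected regret, which keeps the sketch short.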
Similar Papers
Stochastic Bandits for Crowdsourcing and Multi-Platform Autobidding
CS and Game Theory
Helps spend money fairly on many tasks.
Algorithm Design and Stronger Guarantees for the Improving Multi-Armed Bandits Problem
Machine Learning (CS)
Helps computers pick the best option faster.
Improved Regret Bounds for Linear Bandits with Heavy-Tailed Rewards
Machine Learning (CS)
Helps computers learn faster with unpredictable rewards.