On Instability of Minimax Optimal Optimism-Based Bandit Algorithms
By: Samya Praharaj, Koulik Khamaru
Potential Business Impact:
Shows that fast-learning decision algorithms can generate data that standard statistical methods cannot reliably analyze.
Statistical inference from data generated by multi-armed bandit (MAB) algorithms is challenging due to their adaptive, non-i.i.d. nature. A classical manifestation is that sample averages of arm rewards under bandit sampling may fail to satisfy a central limit theorem. Lai and Wei's stability condition provides a sufficient, and essentially necessary, criterion for asymptotic normality in bandit problems. While the celebrated Upper Confidence Bound (UCB) algorithm satisfies this stability condition, it is not minimax optimal, raising the question of whether minimax optimality and statistical stability can be achieved simultaneously. In this paper, we analyze the stability properties of a broad class of bandit algorithms based on the optimism principle. We establish general structural conditions under which such algorithms violate the Lai-Wei stability criterion. As a consequence, we show that widely used minimax-optimal UCB-style algorithms, including MOSS, Anytime-MOSS, Vanilla-MOSS, ADA-UCB, OC-UCB, KL-MOSS, KL-UCB++, KL-UCB-SWITCH, and Anytime KL-UCB-SWITCH, are unstable. We complement our theoretical results with numerical simulations demonstrating that, in all these cases, the sample means fail to exhibit asymptotic normality. Overall, our findings point to a fundamental tension between statistical stability and minimax-optimal regret; whether bandit algorithms achieving both properties exist remains an important open direction.
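The kind of Monte Carlo check the abstract describes can be sketched as follows. This is an illustrative simulation, not the authors' code: it runs a generic optimism-based bandit (arm with the largest empirical mean plus an exploration bonus) under the standard UCB1 and MOSS bonuses, then inspects the standardized sample mean of the suboptimal arm, which a central limit theorem would send to a standard normal. The arm means, horizon, and replication count are arbitrary illustrative choices.

```python
# Hedged sketch: compare standardized sample means of the suboptimal arm
# under UCB1 (stable per Lai-Wei) and MOSS (shown unstable in the paper).
import math
import random

def run_bandit(means, horizon, bonus, rng):
    """Generic optimism-based bandit: play the arm maximizing mean + bonus."""
    k = len(means)
    pulls, sums = [0] * k, [0.0] * k
    for t in range(horizon):
        if t < k:
            a = t  # initialization: pull each arm once
        else:
            a = max(range(k),
                    key=lambda i: sums[i] / pulls[i] + bonus(t, pulls[i]))
        pulls[a] += 1
        sums[a] += rng.gauss(means[a], 1.0)  # unit-variance Gaussian rewards
    return pulls, sums

K, T = 2, 2000
means = [0.5, 0.0]  # arm 0 is optimal; gap 0.5 (illustrative values)

# UCB1 exploration bonus and the horizon-aware MOSS bonus.
ucb_bonus = lambda t, s: math.sqrt(2.0 * math.log(t + 1) / s)
moss_bonus = lambda t, s: math.sqrt(max(math.log(T / (K * s)), 0.0) / s)

def standardized_means(bonus, reps, seed):
    """sqrt(n) * (empirical mean - true mean) for the suboptimal arm."""
    rng = random.Random(seed)
    zs = []
    for _ in range(reps):
        pulls, sums = run_bandit(means, T, bonus, rng)
        n1 = pulls[1]
        zs.append(math.sqrt(n1) * (sums[1] / n1 - means[1]))
    return zs

for name, bonus in [("UCB1", ucb_bonus), ("MOSS", moss_bonus)]:
    zs = standardized_means(bonus, reps=300, seed=0)
    m = sum(zs) / len(zs)
    v = sum((z - m) ** 2 for z in zs) / len(zs)
    # Under asymptotic normality these should be near (0, 1); the paper's
    # simulations show departures for the minimax-optimal rules.
    print(f"{name}: mean={m:.2f}, var={v:.2f}")
```

A histogram or Q-Q plot of `zs` (e.g. via matplotlib) would make the comparison to a standard normal visual, which is how such instability is typically demonstrated.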
Similar Papers
Near-Optimal Regret for Efficient Stochastic Combinatorial Semi-Bandits
Machine Learning (CS)
Helps computers pick the best options faster.
Statistical Inference under Adaptive Sampling with LinUCB
Statistics Theory
Makes computer learning more accurate and trustworthy.
Algorithm Design and Stronger Guarantees for the Improving Multi-Armed Bandits Problem
Machine Learning (CS)
Helps computers pick the best option faster.