On the Problem of Best Arm Retention
By: Houshuang Chen, Yuchen He, Chihao Zhang
Potential Business Impact:
Narrows many options down to a short list without losing the best one.
This paper presents a comprehensive study of the problem of Best Arm Retention (BAR), which has recently found applications in streaming algorithms for multi-armed bandits. In the BAR problem, the goal is to retain $m$ arms out of $n$, with the best arm among them, after a number of trials in the stochastic multi-armed bandit setting. We first investigate pure exploration for the BAR problem under different criteria, and then minimize the regret under specific constraints, in the context of further exploration in streaming algorithms.
- We begin by revisiting the lower bound for $(\varepsilon,\delta)$-PAC algorithms for Best Arm Identification (BAI) and adapt the classical KL-divergence argument to derive optimal bounds for $(\varepsilon,\delta)$-PAC algorithms for BAR.
- We further study another variant of the problem, called $r$-BAR, which requires that the expected gap between the best arm and the best arm retained be less than $r$. We prove tight sample complexity bounds for this problem.
- We explore the regret minimization problem for $r$-BAR and develop an algorithm that goes beyond pure exploration. We conclude with a conjecture on the optimal regret in this setting.
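To make the two objectives concrete, the sketch below simulates a stochastic Bernoulli bandit and runs a naive uniform-exploration baseline: pull every arm equally often, retain the $m$ arms with the highest empirical means, then report whether the best arm was kept (the BAR guarantee) and the gap between the best arm and the best retained arm (the $r$-BAR criterion). This is an illustrative sketch only, not the algorithms proposed in the paper; the function names, the reward instance, and the pull budget are assumptions made for the example.

```python
import numpy as np


def uniform_explore_retain(means, m, pulls_per_arm, rng=None):
    """Naive baseline for the BAR setting (not the paper's algorithm):
    pull each arm `pulls_per_arm` times, keep the m arms with the
    highest empirical means. `means` are the (hidden) Bernoulli reward
    means, used here only to simulate reward draws."""
    rng = np.random.default_rng() if rng is None else rng
    # Empirical mean of each arm after pulls_per_arm i.i.d. Bernoulli draws.
    empirical = rng.binomial(pulls_per_arm, means) / pulls_per_arm
    # Indices of the m arms with the largest empirical means.
    return np.argsort(empirical)[-m:]


def evaluate(means, retained):
    """Report the two quantities the BAR variants care about:
    whether the best arm was retained, and the r-BAR gap between the
    best arm and the best arm retained."""
    best = int(np.argmax(means))
    kept_best = best in {int(i) for i in retained}
    gap = means[best] - max(means[i] for i in retained)
    return kept_best, gap


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    means = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])  # hypothetical instance
    retained = uniform_explore_retain(means, m=3, pulls_per_arm=200, rng=rng)
    kept_best, gap = evaluate(means, retained)
    print("retained arms:", sorted(int(i) for i in retained))
    print(f"best arm kept: {kept_best}, gap to best retained arm: {gap:.3f}")
```

For this baseline, increasing `pulls_per_arm` drives down both the probability of dropping the best arm and the expected gap to the best retained arm; the paper's sample-complexity and regret bounds quantify how cheaply such guarantees can be achieved in principle.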
Similar Papers
Near Optimal Best Arm Identification for Clustered Bandits
Machine Learning (CS)
Helps many robots learn the best action faster.
Risk-sensitive Bandits: Arm Mixture Optimality and Regret-efficient Algorithms
Machine Learning (Stat)
Helps computers choose the best option, even when risky.
Tight Gap-Dependent Memory-Regret Trade-Off for Single-Pass Streaming Stochastic Multi-Armed Bandits
Machine Learning (CS)
Helps computers learn with less memory.