Sparse Optimistic Information Directed Sampling
By: Ludovic Schwartz, Hamish Flynn, Gergely Neu
Potential Business Impact:
Helps computers learn faster with less data.
Many high-dimensional online decision-making problems can be modeled as stochastic sparse linear bandits. Most existing algorithms are designed to achieve optimal worst-case regret in either the data-rich regime, where polynomial dependence on the ambient dimension is unavoidable, or the data-poor regime, where dimension-independence is possible at the cost of worse dependence on the number of rounds. In contrast, the sparse Information Directed Sampling (IDS) algorithm satisfies a Bayesian regret bound that has the optimal rate in both regimes simultaneously. In this work, we explore the use of Sparse Optimistic Information Directed Sampling (SOIDS) to achieve the same adaptivity in the worst-case setting, without Bayesian assumptions. Through a novel analysis that enables the use of a time-dependent learning rate, we show that SOIDS can optimally balance information and regret. Our results extend the theoretical guarantees of IDS, providing the first algorithm that simultaneously achieves optimal worst-case regret in both the data-rich and data-poor regimes. We empirically demonstrate the strong performance of SOIDS.
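The balance between information and regret described above is the core of IDS-style methods: each round, the learner picks the action minimizing an information ratio, i.e., squared estimated regret divided by estimated information gain. The sketch below illustrates this selection rule in its simplest deterministic form; the regret estimates, information gains, and helper name `ids_action` are illustrative assumptions, and the paper's SOIDS additionally uses optimistic estimates, sparsity-aware information measures, and a time-dependent learning rate, none of which are modeled here.

```python
import numpy as np

def ids_action(regret_estimates, info_gains, eps=1e-12):
    """Pick the action minimizing the deterministic information ratio:
    (estimated regret)^2 / (estimated information gain).

    A minimal sketch of the IDS selection rule, not the paper's SOIDS.
    `eps` guards against division by a zero information gain.
    """
    regret_estimates = np.asarray(regret_estimates, dtype=float)
    info_gains = np.asarray(info_gains, dtype=float)
    ratios = regret_estimates**2 / (info_gains + eps)
    return int(np.argmin(ratios))

# Toy example: action 2 has moderate estimated regret but a high
# information gain, so its ratio is lowest and it gets selected.
print(ids_action([0.5, 0.1, 0.3], [0.05, 0.01, 0.9]))  # -> 2
```

Note how the rule can prefer a slightly worse-looking action when it is far more informative, which is exactly the trade-off the abstract refers to.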
Similar Papers
Information-directed sampling for bandits: a primer
Machine Learning (CS)
Teaches computers to learn best by trying things.
Empirical Bound Information-Directed Sampling for Norm-Agnostic Bandits
Machine Learning (Stat)
Improves computer learning by guessing better.
Sample-Adaptivity Tradeoff in On-Demand Sampling
Machine Learning (CS)
Helps computers learn faster with fewer tries.