Sparse Nonparametric Contextual Bandits
By: Hamish Flynn, Julia Olkhovskaya, Paul Rognon-Vael
Potential Business Impact:
Helps computers learn the best choices faster.
This paper studies the problem of simultaneously learning relevant features and minimising regret in contextual bandit problems. We introduce and analyse a new class of contextual bandit problems, called sparse nonparametric contextual bandits, in which the expected reward function lies in the linear span of a small unknown set of features that belongs to a known infinite set of candidate features. We consider two notions of sparsity, for which the set of candidate features is either countable or uncountable. Our contribution is twofold. First, we provide lower bounds on the minimax regret, which show that polynomial dependence on the number of actions is generally unavoidable in this setting. Second, we show that a variant of the Feel-Good Thompson Sampling algorithm enjoys regret bounds that match our lower bounds up to logarithmic factors in the horizon, and have logarithmic dependence on the effective number of candidate features. When we apply our results to kernelised and neural contextual bandits, we find that sparsity always enables better regret bounds, as long as the horizon is large enough relative to the sparsity and the number of actions.
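For readers unfamiliar with the algorithmic template the paper builds on, the following is a minimal sketch of plain linear Thompson Sampling for a contextual bandit, not the Feel-Good variant analysed in the paper and without the sparse feature-selection component. It assumes a known linear feature map, unit-variance Gaussian prior and noise, and synthetic contexts; all names and parameters are illustrative.

```python
import numpy as np

def lin_thompson_sampling(T=2000, d=5, n_actions=4, noise=0.1, seed=0):
    """Run plain linear Thompson Sampling on a synthetic problem and
    return the fraction of rounds where the chosen action matched the
    true optimal action (a rough proxy for low regret)."""
    rng = np.random.default_rng(seed)
    theta_star = rng.normal(size=d)  # unknown true reward parameter
    A = np.eye(d)                    # posterior precision (unit prior, unit noise assumed)
    b = np.zeros(d)                  # accumulated feature * reward sums
    picked_optimal = 0
    for _ in range(T):
        contexts = rng.normal(size=(n_actions, d))     # one feature vector per action
        cov = np.linalg.inv(A)
        theta = rng.multivariate_normal(cov @ b, cov)  # sample from the posterior
        a = int(np.argmax(contexts @ theta))           # act greedily on the sample
        reward = contexts[a] @ theta_star + rng.normal(scale=noise)
        A += np.outer(contexts[a], contexts[a])        # conjugate Bayesian update
        b += reward * contexts[a]
        picked_optimal += a == int(np.argmax(contexts @ theta_star))
    return picked_optimal / T
```

The Feel-Good variant differs by adding an optimism ("feel-good") term to the posterior, which is what enables the paper's matching upper bounds; this sketch only illustrates the sample-then-act loop shared by both.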
Similar Papers
Sparse Additive Contextual Bandits: A Nonparametric Approach for Online Decision-making with High-dimensional Covariates
Machine Learning (Stat)
Helps computers learn better with lots of information.
Navigating Sparsities in High-Dimensional Linear Contextual Bandits
Statistics Theory
Teaches computers to make better choices faster.
Adaptive Data Augmentation for Thompson Sampling
Machine Learning (Stat)
Learns the best choices faster to earn rewards.