Empirical Bound Information-Directed Sampling for Norm-Agnostic Bandits
By: Piotr M. Suder, Eric Laber
Potential Business Impact:
Lets learning systems make good choices without hand-tuned settings.
Information-directed sampling (IDS) is a powerful framework for bandit problems that has shown strong results in both Bayesian and frequentist settings. However, frequentist IDS, like many other bandit algorithms, requires prior knowledge of a (relatively) tight upper bound on the norm of the true parameter vector governing the reward model in order to perform well. Unfortunately, this requirement is rarely satisfied in practice. As we demonstrate, a poorly calibrated bound can lead to significant regret accumulation. To address this issue, we introduce a novel frequentist IDS algorithm that iteratively refines a high-probability upper bound on the true parameter norm using accumulating data. We focus on the linear bandit setting with heteroskedastic subgaussian noise. Our method leverages a mixture of information gain criteria to balance exploration aimed at tightening the estimated parameter norm bound against direct search for the optimal action. We establish regret bounds for our algorithm that do not depend on an initially assumed parameter norm bound, and we demonstrate empirically that our method outperforms state-of-the-art IDS and UCB algorithms.
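The core idea of refining a norm bound from accumulating data can be illustrated with a minimal sketch. The snippet below is not the paper's algorithm: it uses a ridge-regression estimate in a toy linear bandit with a UCB-style action rule standing in for IDS, and an Abbasi-Yadkori-style self-normalized confidence radius as an illustrative stand-in for the paper's empirical norm bound. All constants (`lam`, `delta`, `sigma`) and the bound formula are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear bandit: reward of action x is x^T theta_star + subgaussian noise.
# The learner does not know theta_star or its norm.
d = 3
theta_star = rng.normal(size=d)
actions = rng.normal(size=(20, d))
actions /= np.linalg.norm(actions, axis=1, keepdims=True)  # unit-norm arms

lam = 1.0     # ridge regularization (assumed)
delta = 0.05  # confidence level (assumed)
sigma = 0.1   # noise scale, taken as known here for simplicity (assumed)

V = lam * np.eye(d)  # regularized Gram matrix
b = np.zeros(d)      # accumulated reward-weighted features
norm_bounds = []

for t in range(1, 201):
    theta_hat = np.linalg.solve(V, b)  # ridge estimate of theta_star
    # Illustrative self-normalized confidence radius, with ||theta_hat||
    # plugged in where a known norm bound would normally appear.
    beta = (sigma * np.sqrt(d * np.log((1 + t / lam) / delta))
            + np.sqrt(lam) * np.linalg.norm(theta_hat))
    # Data-driven high-probability upper bound on ||theta_star||:
    # ||theta_star|| <= ||theta_hat|| + beta / sqrt(lambda_min(V)).
    S_hat = np.linalg.norm(theta_hat) + beta / np.sqrt(np.linalg.eigvalsh(V)[0])
    norm_bounds.append(S_hat)

    # Optimism-based action choice (placeholder for the IDS criterion).
    widths = np.sqrt(np.einsum("ij,jk,ik->i", actions, np.linalg.inv(V), actions))
    x = actions[np.argmax(actions @ theta_hat + beta * widths)]

    r = x @ theta_star + sigma * rng.normal()  # observe noisy reward
    V += np.outer(x, x)
    b += r * x

print(f"true norm: {np.linalg.norm(theta_star):.3f}")
print(f"estimated norm bound after 200 rounds: {norm_bounds[-1]:.3f}")
```

As data accumulates, `V` grows and the plug-in bound `S_hat` tightens toward the true norm; the paper's contribution is, roughly, to do this refinement with valid high-probability guarantees while also steering exploration through an IDS information-gain criterion rather than the crude optimism rule used here.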
Similar Papers
Sparse Optimistic Information Directed Sampling
Machine Learning (CS)
Helps computers learn faster with less data.
Simulation-Based Inference for Adaptive Experiments
Methodology
Finds best treatments faster, helps more people.
A Control Theory inspired Exploration Method for a Linear Bandit driven by a Linear Gaussian Dynamical System
Systems and Control
Helps computers learn faster by trying new things.