Sparse Optimistic Information Directed Sampling

Published: October 28, 2025 | arXiv ID: 2510.24234v1

By: Ludovic Schwartz, Hamish Flynn, Gergely Neu

Potential Business Impact:

Helps computers learn faster with less data.

Business Areas:
Intelligent Systems, Artificial Intelligence, Data and Analytics, Science and Engineering

Many high-dimensional online decision-making problems can be modeled as stochastic sparse linear bandits. Most existing algorithms are designed to achieve optimal worst-case regret in either the data-rich regime, where polynomial dependence on the ambient dimension is unavoidable, or the data-poor regime, where dimension-independence is possible at the cost of worse dependence on the number of rounds. In contrast, the sparse Information Directed Sampling (IDS) algorithm satisfies a Bayesian regret bound that has the optimal rate in both regimes simultaneously. In this work, we explore the use of Sparse Optimistic Information Directed Sampling (SOIDS) to achieve the same adaptivity in the worst-case setting, without Bayesian assumptions. Through a novel analysis that enables the use of a time-dependent learning rate, we show that SOIDS can optimally balance information and regret. Our results extend the theoretical guarantees of IDS, providing the first algorithm that simultaneously achieves optimal worst-case regret in both the data-rich and data-poor regimes. We empirically demonstrate the good performance of SOIDS.
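The core idea of Information Directed Sampling referenced above is to pick an action distribution that minimizes the ratio of squared expected regret to expected information gain. The paper's SOIDS algorithm adds sparsity-aware estimates and a time-dependent learning rate; the sketch below shows only the generic IDS selection step over a finite action set, with per-action regret and information-gain estimates passed in as assumed inputs (the function name and the two-action mixture grid search are illustrative, not the authors' implementation).

```python
import numpy as np

def ids_distribution(regret, info_gain, grid=101):
    """Generic IDS step: minimize (expected regret)^2 / (expected info gain)
    over distributions supported on at most two actions (a standard
    simplification, since an optimizer of the information ratio can
    always be found with support size <= 2).

    regret, info_gain: per-action estimates supplied by the caller.
    Returns (best_ratio, (action_i, action_j, prob_on_i)).
    """
    regret = np.asarray(regret, dtype=float)
    info_gain = np.asarray(info_gain, dtype=float)
    K = len(regret)
    best_ratio, best_mix = np.inf, None
    for i in range(K):
        for j in range(K):
            # Mixture: play action i with prob q, action j with prob 1-q.
            for q in np.linspace(0.0, 1.0, grid):
                d = q * regret[i] + (1 - q) * regret[j]
                g = q * info_gain[i] + (1 - q) * info_gain[j]
                if g <= 0:
                    continue  # no information, ratio undefined
                ratio = d * d / g
                if ratio < best_ratio:
                    best_ratio, best_mix = ratio, (i, j, q)
    return best_ratio, best_mix
```

For example, with `regret = [0.5, 1.0]` and `info_gain = [0.1, 1.0]`, neither pure action is optimal: mixing a little of the low-regret action into the high-information action lowers the ratio below both pure-action values, which is exactly the information–regret trade-off the abstract describes.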

Country of Origin
🇪🇸 Spain

Page Count
38 pages

Category
Computer Science:
Machine Learning (CS)