
Sample-Adaptivity Tradeoff in On-Demand Sampling

Published: November 19, 2025 | arXiv ID: 2511.15507v1

By: Nika Haghtalab, Omar Montasser, Mingda Qiao

Potential Business Impact:

Helps machine learning systems reach a target accuracy using fewer samples and fewer rounds of data collection.

Business Areas:
A/B Testing, Data and Analytics

We study the tradeoff between sample complexity and round complexity in on-demand sampling, where the learning algorithm adaptively samples from $k$ distributions over a limited number of rounds. In the realizable setting of Multi-Distribution Learning (MDL), we show that the optimal sample complexity of an $r$-round algorithm scales approximately as $d k^{\Theta(1/r)} / \varepsilon$. For the general agnostic case, we present an algorithm that achieves near-optimal sample complexity of $\widetilde{O}((d + k) / \varepsilon^2)$ within $\widetilde{O}(\sqrt{k})$ rounds. Of independent interest, we introduce a new framework, Optimization via On-Demand Sampling (OODS), which abstracts the sample-adaptivity tradeoff and captures most existing MDL algorithms. We establish nearly tight bounds on the round complexity in the OODS setting. The upper bounds directly yield the $\widetilde{O}(\sqrt{k})$-round algorithm for agnostic MDL, while the lower bounds imply that achieving sub-polynomial round complexity would require fundamentally new techniques that bypass the inherent hardness of OODS.
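To make the realizable-setting tradeoff concrete, the following minimal sketch plugs hypothetical values of $d$, $k$, and $\varepsilon$ into the stated $d k^{\Theta(1/r)} / \varepsilon$ scaling. The hidden constant in the $\Theta(1/r)$ exponent is set to 1 for illustration, and all numeric values are made up rather than taken from the paper.

```python
# Illustrative sketch (not the paper's algorithm): evaluate the realizable-MDL
# sample complexity scaling d * k^(1/r) / eps for several round budgets r.
# The Theta(1/r) exponent's hidden constant is assumed to be 1 here, and
# leading constants / log factors are suppressed.

def realizable_mdl_samples(d: int, k: int, eps: float, r: int) -> float:
    """Sample complexity scaling d * k^(1/r) / eps (constants suppressed)."""
    return d * k ** (1.0 / r) / eps

# Hypothetical VC dimension, number of distributions, and target accuracy.
d, k, eps = 100, 64, 0.1

for r in (1, 2, 3, 6):
    print(f"r={r}: ~{realizable_mdl_samples(d, k, eps, r):,.0f} samples")
```

Under these assumed numbers, a single round costs on the order of $dk/\varepsilon$ samples, while six rounds drive the $k^{1/r}$ factor down to $64^{1/6} = 2$, illustrating how a modest increase in adaptivity sharply reduces the sample budget.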

Page Count
50 pages

Category
Computer Science:
Machine Learning (CS)