Sample-Adaptivity Tradeoff in On-Demand Sampling
By: Nika Haghtalab, Omar Montasser, Mingda Qiao
Potential Business Impact:
Helps computers learn faster with fewer tries.
We study the tradeoff between sample complexity and round complexity in on-demand sampling, where the learning algorithm adaptively samples from $k$ distributions over a limited number of rounds. In the realizable setting of Multi-Distribution Learning (MDL), we show that the optimal sample complexity of an $r$-round algorithm scales approximately as $d k^{\Theta(1/r)} / \varepsilon$. For the general agnostic case, we present an algorithm that achieves near-optimal sample complexity of $\widetilde{O}((d + k) / \varepsilon^2)$ within $\widetilde{O}(\sqrt{k})$ rounds. Of independent interest, we introduce a new framework, Optimization via On-Demand Sampling (OODS), which abstracts the sample-adaptivity tradeoff and captures most existing MDL algorithms. We establish nearly tight bounds on the round complexity in the OODS setting. The upper bounds directly yield the $\widetilde{O}(\sqrt{k})$-round algorithm for agnostic MDL, while the lower bounds imply that achieving sub-polynomial round complexity would require fundamentally new techniques that bypass the inherent hardness of OODS.
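To make the realizable-setting bound concrete, here is a minimal numerical sketch. The values of $d$, $k$, and $\varepsilon$ are made up, and the hidden constant in the $\Theta(1/r)$ exponent is taken to be exactly $1$ purely for illustration; the point is only how quickly the $k^{1/r}$ factor decays as the number of rounds grows.

```python
# Hypothetical illustration of the realizable-MDL bound from the abstract:
# an r-round algorithm needs roughly d * k^(Theta(1/r)) / eps samples.
# We assume the exponent is exactly 1/r (illustration only, not the paper's
# constant) to see how extra rounds of adaptivity shrink the sample budget.

d, k, eps = 100, 1000, 0.1  # hypothetical VC dimension, #distributions, accuracy

for r in [1, 2, 4, 8, 16]:
    samples = d * k ** (1.0 / r) / eps
    print(f"r = {r:2d} rounds  ->  ~{samples:,.0f} samples")

# As r grows, k^(1/r) -> 1, so the bound approaches d / eps: full adaptivity
# removes the polynomial dependence on k that a one-round algorithm pays.
```

Under these toy numbers, a single round pays a full factor of $k = 1000$ on top of $d/\varepsilon$, while a handful of rounds already brings the bound within a small constant of $d/\varepsilon$, which is the tradeoff the abstract describes.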
Similar Papers
Learning Partitions with Optimal Query and Round Complexities
Data Structures and Algorithms
Groups items into categories faster with fewer questions.
Sparse Optimistic Information Directed Sampling
Machine Learning (CS)
Helps computers learn faster with less data.
Sample Complexity of Distributionally Robust Off-Dynamics Reinforcement Learning with Online Interaction
Machine Learning (CS)
Teaches robots to learn safely in new places.