Robust Batched Bandits

Published: October 4, 2025 | arXiv ID: 2510.03798v1

By: Yunwen Guo, Yunlun Shu, Gongyi Zhuo, and more

Potential Business Impact:

Finds best treatments faster, even with messy results.

Business Areas:
A/B Testing, Data and Analytics

The batched multi-armed bandit (MAB) problem, in which rewards are collected in batches, is crucial for applications such as clinical trials. Existing research predominantly assumes light-tailed reward distributions, yet many real-world scenarios, including clinical outcomes, exhibit heavy-tailed characteristics. This paper bridges this gap by proposing robust batched bandit algorithms designed for heavy-tailed rewards, within both finite-arm and Lipschitz-continuous settings. We reveal a surprising phenomenon: in the instance-independent regime, as well as in the Lipschitz setting, heavier-tailed rewards necessitate a smaller number of batches to achieve near-optimal regret. In stark contrast, for the instance-dependent setting, the required number of batches to attain near-optimal regret remains invariant with respect to tail heaviness.
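The abstract's setting (batched pulls, heavy-tailed rewards, robust estimation) can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a generic batched successive-elimination scheme with a truncated-mean estimator, a standard robust-statistics device for heavy tails when only a (1+epsilon)-th moment is assumed. The truncation level, confidence radius, and constants below are illustrative assumptions, not the paper's choices.

```python
import math
import random

def truncated_mean(samples, b):
    """Robust mean estimate: samples beyond threshold b are zeroed out,
    limiting the influence of heavy-tailed outliers."""
    return sum(x if abs(x) <= b else 0.0 for x in samples) / len(samples)

def batched_robust_bandit(arms, n_batches, batch_size, epsilon=1.0, seed=0):
    """Batched successive elimination with a truncated-mean estimator.

    arms: list of callables; arms[a](rng) draws one (possibly heavy-tailed)
          reward for arm a. Assumes a finite (1+epsilon)-th moment.
    Rewards are only observed at the end of each batch, mimicking the
    batched-feedback constraint (e.g. stages of a clinical trial).
    """
    rng = random.Random(seed)
    active = list(range(len(arms)))
    rewards = {a: [] for a in active}
    est = {}
    for _ in range(n_batches):
        # Pull every still-active arm batch_size times within this batch.
        for a in active:
            rewards[a].extend(arms[a](rng) for _ in range(batch_size))
        n = len(rewards[active[0]])  # equal counts across active arms
        # Truncation level grows with n so the estimator's bias vanishes
        # while outliers stay controlled (illustrative schedule).
        b = n ** (1.0 / (1.0 + epsilon))
        for a in active:
            est[a] = truncated_mean(rewards[a], b)
        # Confidence radius for heavy tails: decays as n^(-eps/(1+eps)),
        # slower than the light-tailed n^(-1/2) rate (constant is ad hoc).
        rad = 4.0 * (math.log(10.0 * n) / n) ** (epsilon / (1.0 + epsilon))
        best = max(est[a] for a in active)
        active = [a for a in active if est[a] >= best - 2.0 * rad]
    return max(active, key=lambda a: est[a])
```

A usage sketch: pass sampling functions as arms, e.g. `arms = [lambda rng: 0.9 + rng.gauss(0, 0.1), lambda rng: 0.1 + rng.gauss(0, 0.1)]`, then `batched_robust_bandit(arms, n_batches=3, batch_size=50)` returns the index of the surviving best arm. The paper's finding concerns how few such batches suffice for near-optimal regret as tails get heavier.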

Page Count
39 pages

Category
Computer Science:
Machine Learning (CS)