Collaborative Min-Max Regret in Grouped Multi-Armed Bandits
By: Moïse Blanchard, Vineet Goyal
Potential Business Impact:
Helps groups share learning to find best choices faster.
We study the impact of sharing exploration in multi-armed bandits in a grouped setting where a set of groups have overlapping feasible action sets [Baek and Farias '24]. In this grouped bandit setting, groups share reward observations, and the objective is to minimize the collaborative regret, defined as the maximum regret across groups. This naturally captures applications in which one aims to balance the exploration burden between groups or populations -- it is known that standard algorithms can lead to significantly imbalanced exploration costs between groups. We address this problem by introducing Col-UCB, an algorithm that dynamically coordinates exploration across groups. We show that Col-UCB achieves both optimal minimax and instance-dependent collaborative regret up to logarithmic factors. These bounds are adaptive to the structure of shared action sets between groups, providing insights into when collaboration yields significant benefits over each group learning its best action independently.
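To make the setting concrete, here is a minimal sketch of the grouped-bandit model described above: groups with overlapping feasible arm sets pool their reward observations, each group runs a UCB rule over its own feasible set, and performance is measured by the maximum (collaborative) regret across groups. This is an illustrative simplification, not the paper's Col-UCB algorithm; the group names, arm means, and noise model below are assumptions for the example.

```python
import math
import random

def shared_ucb(group_arms, true_means, horizon, seed=0):
    """Grouped bandit sketch: groups pool observations into shared
    per-arm statistics; each group plays UCB over its feasible arms.
    Simplified illustration, not the paper's Col-UCB."""
    rng = random.Random(seed)
    counts = {a: 0 for a in true_means}        # shared pull counts per arm
    sums = {a: 0.0 for a in true_means}        # shared reward sums per arm
    regret = {g: 0.0 for g in group_arms}      # per-group cumulative regret
    best = {g: max(true_means[a] for a in arms)  # best feasible mean per group
            for g, arms in group_arms.items()}
    for t in range(1, horizon + 1):
        for g, arms in group_arms.items():
            def index(a):
                # UCB index computed from the POOLED statistics
                if counts[a] == 0:
                    return float("inf")
                mean = sums[a] / counts[a]
                return mean + math.sqrt(2 * math.log(t * len(group_arms)) / counts[a])
            a = max(arms, key=index)
            r = true_means[a] + rng.gauss(0, 0.1)  # noisy reward (assumed model)
            counts[a] += 1
            sums[a] += r
            regret[g] += best[g] - true_means[a]
    return regret

# Two groups overlapping on arm "b": pooled observations of "b" help both.
groups = {"G1": ["a", "b"], "G2": ["b", "c"]}
means = {"a": 0.5, "b": 0.9, "c": 0.4}
reg = shared_ucb(groups, means, horizon=2000)
max_regret = max(reg.values())  # the collaborative (min-max) objective
```

Because both groups' feasible sets contain the shared optimal arm "b", observations gathered by either group speed up the other's learning, which is the kind of structure under which the paper shows collaboration yields significant benefits.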
Similar Papers
On the optimal regret of collaborative personalized linear bandits
Machine Learning (CS)
Helps many AI agents learn faster together.
Multi-Agent Stage-wise Conservative Linear Bandits
Machine Learning (CS)
Helps many AI agents learn safely together.
Distributed Algorithms for Multi-Agent Multi-Armed Bandits with Collision
Machine Learning (CS)
Helps players get more rewards without talking.