Learning to Coordinate Under Threshold Rewards: A Cooperative Multi-Agent Bandit Framework
By: Michael Ledford, William Regli
Potential Business Impact:
Helps teams of robots or software agents learn, without central control, to team up on tasks that only pay off when enough of them act together.
Cooperative multi-agent systems often face tasks that require coordinated actions under uncertainty. While multi-armed bandit (MAB) problems provide a powerful framework for decentralized learning, most prior work assumes individually attainable rewards. We address the challenging setting where rewards are threshold-activated: an arm yields a payoff only when a minimum number of agents pull it simultaneously, and this threshold is unknown in advance. Complicating matters further, some arms are decoys: they require coordination to activate but yield no reward, introducing a new challenge of wasted joint exploration. We introduce Threshold-Coop-UCB (T-Coop-UCB), a decentralized algorithm that enables agents to jointly learn activation thresholds and reward distributions, forming effective coalitions without centralized control. Empirical results show that T-Coop-UCB consistently outperforms baseline methods in cumulative reward, regret, and coordination metrics, achieving near-Oracle performance. Our findings underscore the importance of joint threshold learning and decoy avoidance for scalable, decentralized cooperation in complex multi-agent systems.
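To make the setting concrete, below is a minimal sketch of a threshold-activated bandit with decoy arms, played by independent UCB1 learners. The class names (ThresholdBanditEnv, UCBAgent), the reward-splitting rule, the noise model, and the exploration constant are all illustrative assumptions for this sketch; they are not the paper's T-Coop-UCB algorithm, which additionally learns the activation thresholds jointly.

```python
# Sketch of the threshold-activated bandit setting described in the abstract.
# Assumptions: Gaussian arm rewards, equal payoff splitting among coordinating
# agents, and plain UCB1 learners with no joint threshold estimation.
import numpy as np

rng = np.random.default_rng(0)


class ThresholdBanditEnv:
    """K arms; arm k pays out only if at least thresholds[k] agents pull it.

    Decoy arms have mean reward 0 even when their threshold is met."""

    def __init__(self, means, thresholds):
        self.means = np.asarray(means, dtype=float)           # per-arm mean reward
        self.thresholds = np.asarray(thresholds, dtype=int)   # unknown to agents

    def step(self, pulls):
        """pulls: array of chosen arm indices, one per agent.
        Returns per-agent rewards (payoff shared equally when an arm activates)."""
        rewards = np.zeros(len(pulls))
        for arm in np.unique(pulls):
            members = np.flatnonzero(pulls == arm)
            if len(members) >= self.thresholds[arm]:           # activation check
                payoff = rng.normal(self.means[arm], 0.1)      # stochastic reward
                rewards[members] = payoff / len(members)       # split payoff
        return rewards


class UCBAgent:
    """Plain UCB1 over observed post-coordination rewards; a stand-in baseline,
    not the joint threshold-learning rule proposed in the paper."""

    def __init__(self, n_arms, c=2.0):
        self.counts = np.zeros(n_arms)
        self.values = np.zeros(n_arms)
        self.c = c
        self.t = 0

    def select(self):
        self.t += 1
        if np.any(self.counts == 0):                   # pull each arm once first
            return int(np.argmin(self.counts))
        bonus = np.sqrt(self.c * np.log(self.t) / self.counts)
        return int(np.argmax(self.values + bonus))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


# Toy run: 4 agents, 3 arms; arm 1 is a decoy (needs 2 agents, pays nothing),
# arm 2 is the best arm but requires 3 agents to activate.
env = ThresholdBanditEnv(means=[0.2, 0.0, 0.9], thresholds=[1, 2, 3])
agents = [UCBAgent(n_arms=3) for _ in range(4)]

for _ in range(2000):
    pulls = np.array([a.select() for a in agents])
    rewards = env.step(pulls)
    for agent, arm, r in zip(agents, pulls, rewards):
        agent.update(arm, r)

print("empirical arm values per agent:")
print(np.round([a.values for a in agents], 3))
```

Because each agent in this sketch learns independently, coalitions large enough to activate high-threshold arms form only by chance, and decoy arms still absorb exploration. That gap is what the abstract attributes to baselines, and what T-Coop-UCB's joint threshold learning and decoy avoidance are designed to close toward Oracle performance.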
Similar Papers
Multi-thresholding Good Arm Identification with Bandit Feedback
Machine Learning (CS)
Finds the best option when there are many goals.
Multi-Agent Stage-wise Conservative Linear Bandits
Machine Learning (CS)
Helps many AI agents learn safely together.
Collaborative Min-Max Regret in Grouped Multi-Armed Bandits
Machine Learning (CS)
Helps groups share learning to find best choices faster.