Learning to Coordinate Under Threshold Rewards: A Cooperative Multi-Agent Bandit Framework

Published: June 18, 2025 | arXiv ID: 2506.15856v1

By: Michael Ledford, William Regli

Potential Business Impact:

Enables teams of robots or software agents to learn, without a central controller, how many participants a task needs before it pays off, so they can form effective coalitions and avoid wasting joint effort on unproductive tasks.

Business Areas:
Collaborative Consumption, Collaboration

Cooperative multi-agent systems often face tasks that require coordinated actions under uncertainty. While multi-armed bandit (MAB) problems provide a powerful framework for decentralized learning, most prior work assumes individually attainable rewards. We address the challenging setting where rewards are threshold-activated: an arm yields a payoff only when a minimum number of agents pull it simultaneously, with this threshold unknown in advance. Complicating matters further, some arms are decoys: they require coordination to activate but yield no reward, introducing a new challenge of wasted joint exploration. We introduce Threshold-Coop-UCB (T-Coop-UCB), a decentralized algorithm that enables agents to jointly learn activation thresholds and reward distributions, forming effective coalitions without centralized control. Empirical results show that T-Coop-UCB consistently outperforms baseline methods in cumulative reward, regret, and coordination metrics, achieving near-Oracle performance. Our findings underscore the importance of joint threshold learning and decoy avoidance for scalable, decentralized cooperation in complex multi-agent environments.
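The abstract does not spell out T-Coop-UCB's update rules, but the threshold-activated setting itself is easy to simulate. Below is a minimal, hypothetical Python sketch: arms pay out only when enough agents pull them in the same round, one decoy arm requires coordination but pays nothing, and each agent runs an independent UCB1 learner as a simple stand-in baseline. All class and variable names are illustrative, not from the paper.

```python
# Hypothetical sketch of the threshold-activated bandit setting described
# in the abstract. Names (ThresholdArm, UCBAgent) are illustrative and
# NOT from the paper; the agents here are plain independent UCB1 learners,
# a baseline rather than T-Coop-UCB itself.
import math
import random

random.seed(0)

class ThresholdArm:
    def __init__(self, mean, threshold):
        self.mean = mean            # per-agent success probability once activated
        self.threshold = threshold  # minimum simultaneous pullers (unknown to agents)

    def pull(self, n_pullers):
        # The arm pays out only if enough agents coordinate on it this round.
        if n_pullers < self.threshold:
            return 0.0
        return float(random.random() < self.mean)  # Bernoulli reward

class UCBAgent:
    def __init__(self, n_arms):
        self.counts = [0] * n_arms    # pulls per arm
        self.values = [0.0] * n_arms  # running mean reward per arm
        self.t = 0                    # rounds played

    def select(self):
        self.t += 1
        for a, c in enumerate(self.counts):  # try every arm once first
            if c == 0:
                return a
        # Standard UCB1 index: empirical mean + exploration bonus.
        return max(range(len(self.counts)),
                   key=lambda a: self.values[a]
                   + math.sqrt(2 * math.log(self.t) / self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Three agents, three arms: a decoy (needs 2 agents, pays nothing),
# a solo-achievable arm, and a high-value arm needing all 3 agents.
arms = [ThresholdArm(0.0, 2), ThresholdArm(0.5, 1), ThresholdArm(0.9, 3)]
agents = [UCBAgent(len(arms)) for _ in range(3)]

for _ in range(2000):
    choices = [agent.select() for agent in agents]
    pulls = {a: choices.count(a) for a in set(choices)}
    for agent, a in zip(agents, choices):
        agent.update(a, arms[a].pull(pulls[a]))

print([[round(v, 2) for v in agent.values] for agent in agents])
```

Run long enough, these independent learners typically settle on the solo-achievable arm: without joint threshold estimation they rarely sustain the three-agent coalition the high-value arm requires. That coordination gap is exactly what the paper's joint threshold learning and decoy avoidance are designed to close.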

Country of Origin
🇺🇸 United States

Page Count
9 pages

Category
Computer Science:
Multiagent Systems