How Exploration Breaks Cooperation in Shared-Policy Multi-Agent Reinforcement Learning

Published: January 9, 2026 | arXiv ID: 2601.05509v1

By: Yi-Ning Weng, Hsuan-Wei Lee

Potential Business Impact:

Explains why teams of AI agents trained with a shared policy can fail to cooperate, and identifies design choices that let them learn to work together reliably.

Business Areas:
Collaborative Consumption, Collaboration

Multi-agent reinforcement learning in dynamic social dilemmas commonly relies on parameter sharing to enable scalability. We show that in shared-policy Deep Q-Network learning, standard exploration can induce a robust and systematic collapse of cooperation even in environments where fully cooperative equilibria are stable and payoff dominant. Through controlled experiments, we demonstrate that shared DQN converges to stable but persistently low-cooperation regimes. This collapse is not caused by reward misalignment, noise, or insufficient training, but by a representational failure arising from partial observability combined with parameter coupling across heterogeneous agent states. Exploration-driven updates bias the shared representation toward locally dominant defection responses, which then propagate across agents and suppress cooperative learning. We confirm that the failure persists across network sizes, exploration schedules, and payoff structures, and disappears when parameter sharing is removed or when agents maintain independent representations. These results identify a fundamental failure mode of shared-policy MARL and establish structural conditions under which scalable learning architectures can systematically undermine cooperation. Our findings provide concrete guidance for the design of multi-agent learning systems in social and economic environments where collective behavior is critical.
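
The abstract contrasts parameter sharing (all agents update one Q-network) with independent per-agent representations. The sketch below is not the authors' code; it is a minimal illustration, using an assumed PyTorch setup with illustrative network sizes and observation dimensions, of how the two architectures and standard epsilon-greedy exploration differ structurally.

```python
# Minimal sketch (assumptions, not the paper's implementation): contrasts a
# shared-policy DQN, where all agents reference one set of parameters, with
# independent per-agent networks. Sizes and observations are illustrative.
import torch
import torch.nn as nn


class QNet(nn.Module):
    """Small Q-network mapping a local observation to Q-values for {cooperate, defect}."""

    def __init__(self, obs_dim: int, n_actions: int = 2, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def make_policies(n_agents: int, obs_dim: int, shared: bool):
    """shared=True: one network (parameter coupling across heterogeneous agent
    states); shared=False: each agent maintains an independent representation."""
    if shared:
        net = QNet(obs_dim)
        return [net] * n_agents  # every agent references the same parameters
    return [QNet(obs_dim) for _ in range(n_agents)]


def epsilon_greedy(qnet: QNet, obs: torch.Tensor, eps: float) -> int:
    """Standard exploration: with probability eps pick a random action.
    Under sharing, updates driven by one agent's exploratory experience
    change the representation used by every other agent."""
    if torch.rand(()) < eps:
        return int(torch.randint(0, 2, ()))
    with torch.no_grad():
        return int(qnet(obs).argmax())


# Usage: heterogeneous partial observations for 4 agents, shared vs independent.
n_agents, obs_dim = 4, 8
obs = torch.randn(n_agents, obs_dim)  # each agent sees a different local state
for shared in (True, False):
    policies = make_policies(n_agents, obs_dim, shared)
    actions = [epsilon_greedy(policies[i], obs[i], eps=0.1) for i in range(n_agents)]
    print("shared" if shared else "independent", actions)
```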

Country of Origin
🇺🇸 United States

Page Count
38 pages

Category
Computer Science:
Multiagent Systems