How Exploration Breaks Cooperation in Shared-Policy Multi-Agent Reinforcement Learning
By: Yi-Ning Weng, Hsuan-Wei Lee
Potential Business Impact:
Shows why AI teams that share one brain stop cooperating, and how to prevent it.
Multi-agent reinforcement learning (MARL) in dynamic social dilemmas commonly relies on parameter sharing to enable scalability. We show that in shared-policy Deep Q-Network (DQN) learning, standard exploration can induce a robust and systematic collapse of cooperation even in environments where fully cooperative equilibria are stable and payoff-dominant. Through controlled experiments, we demonstrate that shared DQN converges to stable but persistently low-cooperation regimes. This collapse is not caused by reward misalignment, noise, or insufficient training, but by a representational failure arising from partial observability combined with parameter coupling across heterogeneous agent states. Exploration-driven updates bias the shared representation toward locally dominant defection responses, which then propagate across agents and suppress cooperative learning. We confirm that the failure persists across network sizes, exploration schedules, and payoff structures, and that it disappears when parameter sharing is removed or when agents maintain independent representations. These results identify a fundamental failure mode of shared-policy MARL and establish structural conditions under which scalable learning architectures can systematically undermine cooperation. Our findings provide concrete guidance for the design of multi-agent learning systems in social and economic environments where collective behavior is critical.
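The structural contrast at the heart of the abstract, one parameter set serving every agent versus one network per agent, is easy to see in code. Below is a minimal, hypothetical PyTorch sketch: the paper's environment, network sizes, and hyperparameters are not given in the abstract, so all names and values here are illustrative. It shows a single shared Q-network updated with one epsilon-greedy TD step, next to an independent-networks baseline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# All sizes below are hypothetical; the abstract does not specify them.
N_AGENTS = 4      # team size
OBS_DIM = 16      # per-agent local observation size
N_ACTIONS = 2     # e.g., cooperate (0) / defect (1) in a social dilemma
EPSILON = 0.1     # exploration rate
GAMMA = 0.99      # discount factor


def make_q_net() -> nn.Module:
    """Small Q-network mapping a local observation to action values."""
    return nn.Sequential(
        nn.Linear(OBS_DIM, 64),
        nn.ReLU(),
        nn.Linear(64, N_ACTIONS),
    )


# Shared-policy setup: every agent queries, and backpropagates into,
# the SAME parameter set.
shared_q = make_q_net()
optimizer = torch.optim.Adam(shared_q.parameters(), lr=1e-3)

# Independent baseline: one Q-network per agent, so an exploratory
# update from one agent cannot perturb another agent's value estimates.
independent_qs = [make_q_net() for _ in range(N_AGENTS)]

# --- One epsilon-greedy step with the shared network ------------------
obs = torch.randn(N_AGENTS, OBS_DIM)   # stacked local observations
q_values = shared_q(obs)               # (N_AGENTS, N_ACTIONS)

greedy = q_values.argmax(dim=1)
random_actions = torch.randint(N_ACTIONS, (N_AGENTS,))
explore = torch.rand(N_AGENTS) < EPSILON
actions = torch.where(explore, random_actions, greedy)

# Dummy transition; a real loop would use the environment's rewards
# and next observations.
rewards = torch.randn(N_AGENTS)
next_obs = torch.randn(N_AGENTS, OBS_DIM)

with torch.no_grad():
    target = rewards + GAMMA * shared_q(next_obs).max(dim=1).values

chosen_q = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)
loss = F.mse_loss(chosen_q, target)

# This single backward pass updates the one parameter set used by ALL
# agents: an exploratory action taken by any one agent shifts the value
# estimates every other agent reads on the next step. The abstract
# argues this coupling is what lets defection-favoring updates
# propagate across the team.
optimizer.zero_grad()
loss.backward()
optimizer.step()

# For contrast, each independent agent evaluates only its own network;
# its parameters receive gradients from its own experience alone.
indep_q_values = torch.stack([q(o) for q, o in zip(independent_qs, obs)])
```

A full DQN would add a replay buffer and a target network; this sketch isolates only the parameter coupling the abstract identifies, and is not the authors' implementation.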
Similar Papers
Emergent Coordination and Phase Structure in Independent Multi-Agent Reinforcement Learning
Machine Learning (CS)
Helps AI agents learn to work together better.
Remembering the Markov Property in Cooperative MARL
Machine Learning (CS)
Teaches robots to work together by learning rules.
Empirical Study on Robustness and Resilience in Cooperative Multi-Agent Reinforcement Learning
Multiagent Systems
Makes AI teams work well even when things go wrong.