Fully-Decentralized MADDPG with Networked Agents

Published: March 9, 2025 | arXiv ID: 2503.06747v1

By: Diego Bolliger, Lorenz Zauter, Robert Ziegler

Potential Business Impact:

Lets many AI agents learn to work together without a central training server, reducing compute cost.

Business Areas:
Peer to Peer Collaboration

In this paper, we devise three actor-critic algorithms with decentralized training for multi-agent reinforcement learning in cooperative, adversarial, and mixed settings with continuous action spaces. To this end, we adapt the MADDPG algorithm by applying a networked communication approach between agents. We introduce surrogate policies to decentralize the training while still allowing local communication during training. In empirical tests, the decentralized algorithms achieve results comparable to the original MADDPG while reducing computational cost; the reduction is more pronounced as the number of agents grows.
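The core idea above, that each agent keeps local surrogate copies of the other agents' policies and refreshes them through neighbour-to-neighbour communication instead of relying on a central trainer, can be sketched in a few lines. The following is a minimal, illustrative toy (not the paper's exact update rule): agents on a ring graph hold parameter estimates for every other agent's policy and run a simple uniform-weight consensus averaging step; the names, graph, and mixing rule are all assumptions for illustration.

```python
import numpy as np

# Toy sketch of networked surrogate policies (illustrative assumption,
# not the paper's exact algorithm): each agent i keeps a local estimate
# of every other agent j's policy parameters and refreshes it by
# averaging with its graph neighbours' estimates.

N = 4          # number of agents
DIM = 3        # policy parameter dimension
rng = np.random.default_rng(0)

# Ring communication graph: agent i exchanges messages with i-1 and i+1.
neighbours = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}

# True policy parameters of each agent (what the surrogates should track).
true_params = rng.normal(size=(N, DIM))

# surrogates[i][j] = agent i's local estimate of agent j's policy.
surrogates = rng.normal(size=(N, N, DIM))
for i in range(N):
    surrogates[i, i] = true_params[i]  # each agent knows its own policy

def consensus_step(surr):
    """One round of local communication: each agent mixes its estimates
    with its neighbours' estimates (uniform-weight consensus)."""
    new = surr.copy()
    for i in range(N):
        group = [surr[i]] + [surr[k] for k in neighbours[i]]
        new[i] = np.mean(group, axis=0)
        new[i, i] = true_params[i]  # own policy is always known exactly
    return new

for _ in range(50):
    surrogates = consensus_step(surrogates)

# After enough rounds, every agent's surrogate of every other agent's
# policy is close to that agent's true parameters, so each critic can be
# trained locally against the surrogates instead of a central copy.
err = np.max(np.abs(surrogates - true_params[None, :, :]))
print(f"max surrogate error after 50 rounds: {err:.5f}")
```

In the actual algorithms the surrogate policies would additionally be updated by gradient steps on locally observed data; the sketch only shows why local communication suffices to keep each agent's view of the others consistent without centralized training.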

Country of Origin
🇨🇭 Switzerland

Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)