Fully-Decentralized MADDPG with Networked Agents
By: Diego Bolliger, Lorenz Zauter, Robert Ziegler
Potential Business Impact:
Lets many AI agents learn to work together without a central coordinator.
In this paper, we devise three actor-critic algorithms with decentralized training for multi-agent reinforcement learning in cooperative, adversarial, and mixed settings with continuous action spaces. To this end, we adapt the MADDPG algorithm by applying a networked communication approach between agents. We introduce surrogate policies in order to decentralize the training while allowing for local communication during training. The decentralized algorithms achieve results comparable to the original MADDPG in empirical tests, while reducing computational cost; the reduction becomes more pronounced as the number of agents grows.
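To make the idea concrete, below is a minimal PyTorch sketch of one way such a decentralized update could look: each agent keeps its own actor and critic plus surrogate policies that imitate the other agents from locally communicated actions, so the MADDPG-style joint critic input can be assembled without a central trainer. All names here (`MLP`, `DecentralizedAgent`, `neighbor_actions`), the MSE imitation loss for the surrogates, and the omission of target networks and exploration noise are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """Small two-hidden-layer network used for actors, critics, surrogates."""
    def __init__(self, in_dim, out_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class DecentralizedAgent:
    """Agent i: own actor/critic plus one surrogate policy per other agent.
    Surrogates are fit to actions communicated by networked neighbors, so the
    joint critic input is computed locally (hypothetical sketch, not the
    paper's exact update rules; target networks omitted for brevity)."""
    def __init__(self, i, obs_dims, act_dims, gamma=0.95, lr=1e-3):
        self.i, self.n, self.gamma = i, len(obs_dims), gamma
        self.actor = MLP(obs_dims[i], act_dims[i])
        # Critic sees all observations and all actions, as in MADDPG.
        self.critic = MLP(sum(obs_dims) + sum(act_dims), 1)
        self.surrogates = {j: MLP(obs_dims[j], act_dims[j])
                           for j in range(self.n) if j != i}
        self.actor_opt = torch.optim.Adam(self.actor.parameters(), lr=lr)
        self.critic_opt = torch.optim.Adam(self.critic.parameters(), lr=lr)
        self.surr_opt = torch.optim.Adam(
            [p for s in self.surrogates.values() for p in s.parameters()], lr=lr)

    def update(self, batch, neighbor_actions):
        """One local training step.
        batch: (obs, acts, rew, next_obs), each a list indexed by agent.
        neighbor_actions: {j: actions} communicated by neighbors j only."""
        obs, acts, rew, next_obs = batch

        # 1) Fit surrogates to imitate the actions neighbors communicated.
        surr_loss = sum(F.mse_loss(self.surrogates[j](obs[j]), a.detach())
                        for j, a in neighbor_actions.items())
        self.surr_opt.zero_grad()
        surr_loss.backward()
        self.surr_opt.step()

        # 2) Critic TD regression; other agents' next actions are predicted
        # by the local surrogates instead of being gathered centrally.
        with torch.no_grad():
            next_acts = [self.actor(next_obs[j]) if j == self.i
                         else self.surrogates[j](next_obs[j])
                         for j in range(self.n)]
            target = rew + self.gamma * self.critic(
                torch.cat(list(next_obs) + next_acts, dim=-1))
        q = self.critic(torch.cat(list(obs) + list(acts), dim=-1))
        critic_loss = F.mse_loss(q, target)
        self.critic_opt.zero_grad()
        critic_loss.backward()
        self.critic_opt.step()

        # 3) Deterministic policy gradient through the local critic, with
        # surrogate actions detached so only the own actor is updated.
        joint = [self.actor(obs[j]) if j == self.i
                 else self.surrogates[j](obs[j]).detach()
                 for j in range(self.n)]
        actor_loss = -self.critic(torch.cat(list(obs) + joint, dim=-1)).mean()
        self.actor_opt.zero_grad()
        actor_loss.backward()
        self.actor_opt.step()
```

Because every agent only needs the actions its neighbors communicate, this kind of update avoids a centralized critic trainer; the per-agent cost grows with the neighborhood rather than with a global coordinator, which is consistent with the abstract's claim that the savings grow with the number of agents.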
Similar Papers
Scalable Multi Agent Diffusion Policies for Coverage Control
Robotics
Helps robots work together better, like a team.
An Improved Multi-Agent Algorithm for Cooperative and Competitive Environments by Identifying and Encouraging Cooperation among Agents
Multiagent Systems
Teaches AI teams to work together for better results.
A Digital Twin-based Multi-Agent Reinforcement Learning Framework for Vehicle-to-Grid Coordination
Distributed, Parallel, and Cluster Computing
Lets electric cars share power without sharing secrets.