Deep Reinforcement Learning for Multi-Agent Coordination
By: Kehinde O. Aina, Sehoon Ha
Potential Business Impact:
Enables teams of robots to coordinate in narrow, crowded spaces without explicit communication, improving collective task performance.
We address the challenge of coordinating multiple robots in narrow and confined environments, where congestion and interference often hinder collective task performance. Drawing inspiration from insect colonies, which achieve robust coordination through stigmergy -- modifying and interpreting environmental traces -- we propose a Stigmergic Multi-Agent Deep Reinforcement Learning (S-MADRL) framework that leverages virtual pheromones to model local and social interactions, enabling decentralized emergent coordination without explicit communication. To overcome the convergence and scalability limitations of existing algorithms such as MADQN, MADDPG, and MAPPO, we leverage curriculum learning, which decomposes complex tasks into progressively harder sub-problems. Simulation results show that our framework achieves the most effective coordination of up to eight agents, where robots self-organize into asymmetric workload distributions that reduce congestion and modulate group performance. This emergent behavior, analogous to strategies observed in nature, demonstrates a scalable solution for decentralized multi-agent coordination in crowded environments with communication constraints.
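The stigmergic mechanism described above can be illustrated with a minimal sketch: agents deposit virtual pheromone into a shared grid, the field evaporates over time, and each agent senses only its local neighborhood. This is a hypothetical illustration of the general idea, not the paper's implementation; the class and parameter names (`PheromoneField`, `evaporation`, `deposit`) are assumptions.

```python
import numpy as np

class PheromoneField:
    """Minimal virtual-pheromone grid (illustrative sketch only).

    Agents deposit pheromone at their cell; the field evaporates each
    step, leaving decaying traces that other agents can sense locally --
    the stigmergic signal, with no direct agent-to-agent communication.
    """

    def __init__(self, shape=(16, 16), evaporation=0.05, deposit=1.0):
        self.grid = np.zeros(shape)
        self.evaporation = evaporation
        self.deposit_amount = deposit

    def deposit(self, pos):
        # An agent leaves a trace at its current cell.
        self.grid[pos] += self.deposit_amount

    def step(self):
        # Exponential decay models evaporation of old traces.
        self.grid *= (1.0 - self.evaporation)

    def sense(self, pos):
        # Local 3x3 neighborhood: the only "social" observation an
        # agent receives, which a decentralized policy can condition on.
        r, c = pos
        r0, r1 = max(r - 1, 0), min(r + 2, self.grid.shape[0])
        c0, c1 = max(c - 1, 0), min(c + 2, self.grid.shape[1])
        return self.grid[r0:r1, c0:c1]

field = PheromoneField()
field.deposit((8, 8))
field.step()
local = field.sense((8, 8))
```

In a full S-MADRL setup, the sensed patch would be appended to each agent's observation so that congestion avoidance can emerge from the traces alone; curriculum learning would then grow the number of agents or task difficulty across training stages.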
Similar Papers
From Pheromones to Policies: Reinforcement Learning for Engineered Biological Swarms
Artificial Intelligence
Applies reinforcement learning to engineered biological swarms so they can learn and switch tasks.
Strategic Coordination for Evolving Multi-agent Systems: A Hierarchical Reinforcement and Collective Learning Approach
Multiagent Systems
Combines hierarchical reinforcement learning with collective learning to coordinate evolving multi-agent systems.
Scalable Multi Agent Diffusion Policies for Coverage Control
Robotics
Uses scalable multi-agent diffusion policies for cooperative coverage control.