An Efficient Approach for Cooperative Multi-Agent Learning Problems
By: Ángel Aso-Mollar, Eva Onaindia
Potential Business Impact:
Teaches robots to work together better.
In this article, we propose a centralized Multi-Agent Learning framework for learning a policy that models the simultaneous behavior of multiple agents that must coordinate to solve a given task. Centralized approaches often suffer from the explosion of an action space defined by all possible combinations of individual actions, known as joint actions. Our approach addresses the coordination problem via a sequential abstraction, which overcomes the scalability problems typical of centralized methods. It introduces a meta-agent, called the "supervisor", which abstracts joint actions as sequential assignments of actions to each agent. This sequential abstraction not only simplifies the centralized joint action space but also enhances the framework's scalability and efficiency. Our experimental results demonstrate that the proposed approach successfully coordinates agents across a variety of Multi-Agent Learning environments of diverse sizes.
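The core idea can be illustrated with a minimal sketch. The numbers, the `supervisor_policy` function, and the placeholder policy below are illustrative assumptions, not the authors' implementation: they only show how a single decision over all joint actions grows multiplicatively with the number of agents, while a sequential per-agent assignment keeps the branching factor at the size of one agent's action set.

```python
from itertools import product

# Hypothetical setting: 4 agents, each with 5 individual actions.
n_agents, n_actions = 4, 5

# Centralized joint-action formulation: one decision over every
# combination of individual actions.
joint_actions = list(product(range(n_actions), repeat=n_agents))
print(len(joint_actions))  # 5**4 = 625 joint actions to choose among

# Sequential abstraction: a supervisor assigns an action to one agent
# at a time, conditioning on the assignments made so far, so each
# step chooses among only n_actions options instead of n_actions**n_agents.
def supervisor_policy(step_policy, n_agents):
    """Build a joint action as a sequence of per-agent assignments."""
    joint = []
    for agent_id in range(n_agents):
        # step_policy sees the partial assignment built so far
        joint.append(step_policy(agent_id, tuple(joint)))
    return tuple(joint)

# Trivial placeholder per-step policy (a learned policy would go here).
greedy = lambda agent_id, partial: agent_id % n_actions
print(supervisor_policy(greedy, n_agents))  # (0, 1, 2, 3)
```

The joint action produced sequentially is still a full assignment for all agents; what changes is the size of each decision the learner faces at every step.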
Similar Papers
Contextual Knowledge Sharing in Multi-Agent Reinforcement Learning with Decentralized Communication and Coordination
Multiagent Systems
Helps robots work together, even with different goals.
Collaborative Multi-Agent Reinforcement Learning Approach for Elastic Cloud Resource Scaling
Distributed, Parallel, and Cluster Computing
Makes cloud computers adjust power automatically.
Distributed Koopman Operator Learning from Sequential Observations
Systems and Control
Helps many robots learn together, even with bad signals.