Remembering the Markov Property in Cooperative MARL
By: Kale-ab Abebe Tessera, Leonard Hinckeldey, Riccardo Zamboni, and more
Potential Business Impact:
Teaches robot teams to cooperate reliably with new partners, not just memorise routines.
Plain English Summary
Imagine teaching a group of robots to work together, like a team of delivery drones. This research finds that current methods often teach them to follow simple, pre-set routines rather than to genuinely understand each other and the situation. As a result, if one robot deviates or a new robot joins, the whole team can fall apart. The real goal is to train robots that are adaptable and actually reason about their teammates, so they can work reliably in any situation, not just with the exact same partners they trained with.
Cooperative multi-agent reinforcement learning (MARL) is typically formalised as a Decentralised Partially Observable Markov Decision Process (Dec-POMDP), where agents must reason about the environment and other agents' behaviour. In practice, current model-free MARL algorithms use simple recurrent function approximators to address the challenge of reasoning about others using partial information. In this position paper, we argue that the empirical success of these methods is not due to effective Markov signal recovery, but rather to learning simple conventions that bypass environment observations and memory. Through a targeted case study, we show that co-adapting agents can learn brittle conventions, which then fail when partnered with non-adaptive agents. Crucially, the same models can learn grounded policies when the task design necessitates it, revealing that the issue is not a fundamental limitation of the learning models but a failure of the benchmark design. Our analysis also suggests that modern MARL environments may not adequately test the core assumptions of Dec-POMDPs. We therefore advocate for new cooperative environments built upon two core principles: (1) behaviours grounded in observations and (2) memory-based reasoning about other agents, ensuring success requires genuine skill rather than fragile, co-adapted agreements.
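To make the mechanism in the abstract concrete, here is a minimal sketch (our illustration, not the paper's code; the class name, dimensions, and the use of PyTorch are assumptions) of the kind of simple recurrent function approximator referred to above: a per-agent policy that conditions on the agent's own observation history through a GRU. The paper's point is that such a model can in principle recover grounded, memory-based behaviour, but co-adapting teams can also succeed while effectively ignoring these inputs and falling back on fixed conventions.

```python
# Illustrative sketch (not from the paper): a minimal per-agent recurrent actor
# of the kind commonly used in model-free MARL under partial observability.
import torch
import torch.nn as nn


class RecurrentActor(nn.Module):
    """Per-agent policy pi(a_t | o_1, ..., o_t) with GRU memory."""

    def __init__(self, obs_dim: int, n_actions: int, hidden_dim: int = 64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.policy_head = nn.Linear(hidden_dim, n_actions)

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, obs_dim) -- this agent's own observation history.
        x = torch.relu(self.encoder(obs_seq))
        x, hidden = self.gru(x, hidden)      # memory carried over the history
        logits = self.policy_head(x)         # (batch, time, n_actions)
        return torch.distributions.Categorical(logits=logits), hidden


# A "convention" policy, by contrast, can succeed on some benchmarks while
# ignoring obs_seq entirely, e.g. by emitting a fixed, co-adapted action sequence.
```

In a benchmark built on the paper's two principles, success should require the policy to actually use obs_seq and its hidden state; in a convention-friendly benchmark, a policy that collapses to a fixed action sequence can score just as well, yet fails when paired with non-adaptive partners.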
Similar Papers
Contextual Knowledge Sharing in Multi-Agent Reinforcement Learning with Decentralized Communication and Coordination
Multiagent Systems
Helps robots work together, even with different goals.
Explaining Decentralized Multi-Agent Reinforcement Learning Policies
Artificial Intelligence
Helps people understand how AI teams work together.
From Pixels to Cooperation: Multi-Agent Reinforcement Learning Based on Multimodal World Models
Multiagent Systems
Teaches robots to work together using sight and sound.