Multi-Agent Guided Policy Optimization
By: Yueheng Li, Guangming Xie, Zongqing Lu
Potential Business Impact:
Helps many robots learn to work together better.
Due to practical constraints such as partial observability and limited communication, Centralized Training with Decentralized Execution (CTDE) has become the dominant paradigm in cooperative Multi-Agent Reinforcement Learning (MARL). However, existing CTDE methods often underutilize centralized training or lack theoretical guarantees. We propose Multi-Agent Guided Policy Optimization (MAGPO), a novel framework that better leverages centralized training by integrating centralized guidance with decentralized execution. MAGPO uses an auto-regressive joint policy for scalable, coordinated exploration and explicitly aligns it with decentralized policies to ensure deployability under partial observability. We provide theoretical guarantees of monotonic policy improvement and empirically evaluate MAGPO on 43 tasks across 6 diverse environments. Results show that MAGPO consistently outperforms strong CTDE baselines and matches or surpasses fully centralized approaches, offering a principled and practical solution for decentralized multi-agent learning. Our code and experimental data can be found at https://github.com/liyheng/MAGPO.
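To make the abstract's core idea concrete, below is a minimal, hypothetical sketch of the two ingredients it names: a centralized auto-regressive joint policy (each agent's action is conditioned on the global state and the actions of earlier agents) and an alignment term that pulls the deployable per-agent policies, which see only local observations, toward the guide. All class names, dimensions, and the choice of a KL alignment loss are illustrative assumptions, not the paper's actual implementation; see the linked repository for the authors' code.

```python
# Hypothetical sketch of MAGPO's high-level structure (not the authors' code).
# A centralized guide samples coordinated joint actions during training;
# decentralized policies are aligned to it (here via a KL term, an assumption)
# so execution needs only local observations.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_AGENTS, OBS_DIM, STATE_DIM, N_ACTIONS = 3, 8, 16, 5  # toy sizes

class GuidePolicy(nn.Module):
    """Centralized auto-regressive joint policy: pi(a_i | s, a_{<i})."""
    def __init__(self):
        super().__init__()
        # Conditions on the global state plus one-hot actions of earlier agents.
        in_dim = STATE_DIM + (N_AGENTS - 1) * N_ACTIONS
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, N_ACTIONS))

    def forward(self, state):
        logits = []
        prev = torch.zeros(state.size(0), (N_AGENTS - 1) * N_ACTIONS)
        for i in range(N_AGENTS):
            l = self.net(torch.cat([state, prev], dim=-1))
            logits.append(l)
            if i < N_AGENTS - 1:
                # Greedy action for brevity; sampling would also work.
                a = F.one_hot(l.argmax(-1), N_ACTIONS).float()
                prev = prev.clone()
                prev[:, i * N_ACTIONS:(i + 1) * N_ACTIONS] = a
        return logits  # per-agent action logits, in agent order

class LocalPolicy(nn.Module):
    """Decentralized policy pi_i(a_i | o_i) used at execution time."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_ACTIONS))

    def forward(self, obs):
        return self.net(obs)

def alignment_loss(guide_logits, local_logits):
    """Sum over agents of KL(guide_i || local_i): keeps the deployable
    decentralized policies close to the centralized guide."""
    loss = 0.0
    for g, l in zip(guide_logits, local_logits):
        loss = loss + F.kl_div(F.log_softmax(l, dim=-1),
                               F.softmax(g, dim=-1).detach(),
                               reduction="batchmean")
    return loss

# Toy usage with random data.
state = torch.randn(4, STATE_DIM)                  # global state (training only)
obs = torch.randn(4, N_AGENTS, OBS_DIM)            # local observations
guide = GuidePolicy()
agents = [LocalPolicy() for _ in range(N_AGENTS)]
g_logits = guide(state)
l_logits = [agents[i](obs[:, i]) for i in range(N_AGENTS)]
print("alignment loss:", alignment_loss(g_logits, l_logits).item())
```

The auto-regressive factorization is what makes joint exploration scalable here: the guide never enumerates the exponential joint action space, yet later agents can coordinate with earlier ones. The detach on the guide's distribution reflects the one-way "guidance" direction implied by the abstract; how MAGPO actually couples the two updates to obtain its monotonic improvement guarantee is specified in the paper itself.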
Similar Papers
Centralized Permutation Equivariant Policy for Cooperative Multi-Agent Reinforcement Learning
Multiagent Systems
Helps many robots learn to work together better.
GEPO: Group Expectation Policy Optimization for Stable Heterogeneous Reinforcement Learning
Machine Learning (CS)
Trains smart computer programs far apart.
Using a single actor to output personalized policy for different intersections
Machine Learning (CS)
Makes traffic lights smarter for smoother traffic flow.