MARPO: A Reflective Policy Optimization for Multi-Agent Reinforcement Learning
By: Cuiling Wu, Yaozhong Gan, Junliang Xing, and others
We propose Multi-Agent Reflective Policy Optimization (MARPO) to alleviate sample inefficiency in multi-agent reinforcement learning. MARPO consists of two key components: a reflection mechanism that leverages subsequent trajectories to improve sample efficiency, and an asymmetric clipping mechanism, derived from the KL divergence, that dynamically adjusts the clipping range to improve training stability. We evaluate MARPO in classic multi-agent environments, where it consistently outperforms other methods.
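To make the asymmetric clipping idea concrete, below is a minimal sketch of a PPO-style clipped surrogate with different lower and upper bounds. The abstract states that MARPO derives its bounds from the KL divergence and adjusts them dynamically; the fixed `eps_low`/`eps_high` values and the function name here are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def asymmetric_clip_surrogate(ratio, advantage, eps_low=0.2, eps_high=0.3):
    """Hypothetical PPO-style surrogate with asymmetric clipping bounds.

    MARPO reportedly derives the bounds from a KL divergence and adapts
    them during training; the fixed eps_low/eps_high here are stand-ins.
    """
    # Clip the probability ratio into the asymmetric trust region.
    clipped = np.clip(ratio, 1.0 - eps_low, 1.0 + eps_high)
    # Pessimistic (min) surrogate, as in standard clipped policy optimization.
    return np.minimum(ratio * advantage, clipped * advantage)

# Ratios outside [1 - eps_low, 1 + eps_high] stop contributing gradient.
ratios = np.array([0.7, 1.0, 1.4])
advs = np.array([1.0, 1.0, 1.0])
print(asymmetric_clip_surrogate(ratios, advs))  # → [0.7 1.  1.3]
```

An asymmetric range lets the update tolerate increases and decreases of the action probability differently, which is one plausible way a KL-derived bound would differ from the symmetric `[1 - eps, 1 + eps]` of vanilla PPO.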
Similar Papers
Reflective Preference Optimization (RPO): Enhancing On-Policy Alignment via Hint-Guided Reflection
Artificial Intelligence
Makes AI better by teaching it to fix its own mistakes.
Agentic Reinforced Policy Optimization
Machine Learning (CS)
Teaches AI to use tools better in conversations.
Policy Optimization in Multi-Agent Settings under Partially Observable Environments
Multiagent Systems
Helps robots learn together faster.