CCL: Collaborative Curriculum Learning for Sparse-Reward Multi-Agent Reinforcement Learning via Co-evolutionary Task Evolution
By: Yufei Lin, Chengwei Ye, Huanzhen Zhang, and more
Potential Business Impact:
Teaches robots to work together better.
Sparse reward environments pose significant challenges in reinforcement learning, especially in multi-agent systems (MAS), where feedback is delayed and shared across agents, leading to suboptimal learning. We propose Collaborative Curriculum Learning (CCL), a novel curriculum learning framework that addresses these challenges by (1) refining intermediate tasks for individual agents, (2) using a variational evolutionary algorithm to generate informative subtasks, and (3) co-evolving agents with their environment to enhance training stability. Experiments on five cooperative tasks in the MPE and Hide-and-Seek environments show that CCL outperforms existing methods in sparse-reward settings.
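To make the three components concrete, here is a minimal sketch of a co-evolutionary curriculum loop in Python. Everything in it is an illustrative assumption rather than the paper's actual algorithm or API: `Task`, `mutate`, `informative`, the single difficulty knob, and the scalar `skill` that stands in for real multi-agent training are all hypothetical names introduced for this sketch.

```python
import random

# A minimal sketch of a co-evolutionary curriculum loop: a task population
# is mutated (evolution), filtered for informativeness (neither trivial nor
# impossible for the current agents), and the agents improve on the kept
# tasks, which in turn shifts the task frontier (co-evolution). All names
# and the scalar "skill" stand-in for multi-agent training are assumptions,
# not the paper's method.

random.seed(0)

class Task:
    """A subtask parameterized by one difficulty knob (e.g., goal distance)."""
    def __init__(self, difficulty: float):
        self.difficulty = difficulty

def mutate(task: Task, sigma: float = 0.1) -> Task:
    """Variation step: perturb task parameters to propose new subtasks."""
    return Task(max(0.0, task.difficulty + random.gauss(0.0, sigma)))

def informative(success_rate: float, low: float = 0.2, high: float = 0.8) -> bool:
    """Keep tasks the agents sometimes solve: too easy or too hard teaches nothing."""
    return low <= success_rate <= high

def rollout_success(task: Task, skill: float) -> float:
    """Stand-in for rolling out the multi-agent policy on a task."""
    return 1.0 if skill + random.gauss(0.0, 0.05) >= task.difficulty else 0.0

population = [Task(random.random() * 0.3) for _ in range(8)]
skill = 0.1  # proxy for the joint policy's competence

for generation in range(30):
    # Evaluate each candidate subtask against the current agents.
    scored = [(sum(rollout_success(t, skill) for _ in range(10)) / 10, t)
              for t in population]
    # Selection: keep informative tasks; if none qualify, keep the best one.
    survivors = [t for rate, t in scored if informative(rate)]
    if not survivors:
        survivors = [max(scored, key=lambda pair: pair[0])[1]]
    # Variation: refill the population by mutating survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(8 - len(survivors))]
    # "Training": skill grows toward the hardest task agents can still learn from.
    frontier = max(t.difficulty for t in survivors)
    skill += 0.05 * max(0.0, frontier + 0.1 - skill) + 0.01

print(f"final skill: {skill:.2f}, curriculum frontier: {frontier:.2f}")
```

In the actual framework, the scalar skill update would presumably be replaced by genuine MARL training on the selected subtasks, and mutation would act on full environment parameters rather than one number; this sketch only shows how selection pressure on tasks and learning pressure on agents interlock.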
Similar Papers
Strategic Coordination for Evolving Multi-agent Systems: A Hierarchical Reinforcement and Collective Learning Approach
Multiagent Systems
Helps robots work together better and smarter.
Advancing CMA-ES with Learning-Based Cooperative Coevolution for Scalable Optimization
Machine Learning (CS)
Teaches computers to solve hard problems faster.
Causally Aligned Curriculum Learning
Machine Learning (CS)
Teaches robots to learn faster with tricky problems.