Score: 1

CoMAS: Co-Evolving Multi-Agent Systems via Interaction Rewards

Published: October 9, 2025 | arXiv ID: 2510.08529v1

By: Xiangyuan Xue, Yifan Zhou, Guibin Zhang, and more

Potential Business Impact:

AI agents improve by learning from discussions with each other.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Self-evolution is a central research topic in enabling large language model (LLM)-based agents to continually improve their capabilities after pretraining. Recent research has witnessed a transition from reinforcement learning (RL)-free to RL-based methods. Current RL-based methods either rely on dense external reward signals or extract intrinsic reward signals from LLMs themselves. However, these approaches diverge from the self-evolution mechanisms observed in human intelligence, where individuals learn and improve through mutual discussion and collaboration. In this work, we introduce Co-Evolving Multi-Agent Systems (CoMAS), a novel framework that enables agents to improve autonomously by learning from inter-agent interactions without external supervision. CoMAS generates intrinsic rewards from rich discussion dynamics, employs an LLM-as-a-judge mechanism to formulate these rewards, and optimizes each agent's policy through RL, thereby enabling decentralized and scalable co-evolution. Experimental results demonstrate that CoMAS consistently outperforms untrained agents and achieves state-of-the-art performance across most evaluation settings. Ablation studies confirm the necessity of interaction-based reward signals and reveal promising scalability as the number and diversity of agents increase. These findings establish CoMAS as a novel and effective paradigm for self-evolution in LLM-based agents.
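To make the described loop concrete, here is a minimal sketch of the mechanism the abstract outlines: agents take turns contributing to a discussion, an LLM-as-a-judge scores each contribution as an intrinsic reward, and each agent then runs its own RL update. All names here (`Agent`, `llm_judge`, `policy_update`) are illustrative assumptions, not the authors' actual API.

```python
# Hypothetical sketch of the CoMAS loop described in the abstract.
# The LLM calls and the RL step are stubbed out; only the reward flow
# from inter-agent interaction to per-agent policy updates is shown.

from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    history: list = field(default_factory=list)  # (utterance, reward) pairs

    def respond(self, task: str, discussion: list[str]) -> str:
        # Placeholder for an LLM call conditioned on the task and the
        # discussion so far (assumed interface).
        return f"{self.name}'s proposal for: {task}"


def llm_judge(task: str, utterance: str, discussion: list[str]) -> float:
    # Placeholder for an LLM-as-a-judge call that rates how much the
    # utterance advances the discussion; returns a scalar reward.
    return 0.5


def policy_update(agent: Agent) -> None:
    # Placeholder for an RL step (e.g., a policy-gradient update) on the
    # agent's own (utterance, reward) trajectory; note the update is
    # decentralized, one per agent, as the abstract emphasizes.
    agent.history.clear()


def co_evolve(agents: list[Agent], tasks: list[str], rounds: int = 3) -> None:
    for task in tasks:
        discussion: list[str] = []
        for _ in range(rounds):
            for agent in agents:
                utterance = agent.respond(task, discussion)
                # Intrinsic reward derived from the interaction itself,
                # with no external supervision signal.
                reward = llm_judge(task, utterance, discussion)
                agent.history.append((utterance, reward))
                discussion.append(utterance)
        for agent in agents:
            policy_update(agent)


if __name__ == "__main__":
    co_evolve([Agent("A"), Agent("B"), Agent("C")], ["solve problem X"])
```

The decentralized structure is the point of interest: because each agent optimizes only on its own trajectory, adding more (or more diverse) agents scales naturally, which matches the scalability findings reported in the ablation studies.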

Country of Origin
🇬🇧 United Kingdom

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
Computation and Language