Multi-Agent Evolve: LLM Self-Improve through Co-evolution
By: Yixing Chen, Yiding Wang, Siqi Zhu, and more
Potential Business Impact:
Helps computers learn to solve problems better on their own.
Reinforcement Learning (RL) has demonstrated significant potential in enhancing the reasoning capabilities of large language models (LLMs). However, the success of RL for LLMs relies heavily on human-curated datasets and verifiable rewards, which limits its scalability and generality. Recent self-play RL methods, inspired by the success of that paradigm in games such as Go, aim to improve LLM reasoning without human-annotated data. However, these methods depend primarily on a grounded environment for feedback (e.g., a Python interpreter or a game engine), and extending them to general domains remains challenging. To address these challenges, we propose Multi-Agent Evolve (MAE), a framework that enables LLMs to self-evolve across diverse tasks, including mathematics, reasoning, and general-knowledge Q&A. The core of MAE is a triplet of interacting agents (Proposer, Solver, Judge) instantiated from a single LLM, with reinforcement learning applied to optimize their behaviors. The Proposer generates questions, the Solver attempts solutions, and the Judge evaluates both while co-evolving with them. Experiments on Qwen2.5-3B-Instruct show that MAE achieves an average improvement of 4.54% across multiple benchmarks. These results highlight MAE as a scalable, data-efficient method for enhancing the general reasoning abilities of LLMs with minimal reliance on human-curated supervision.
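To make the Proposer-Solver-Judge loop concrete, here is a minimal sketch of one self-play round, assuming a generic `generate(prompt)` call to a single shared LLM and a simple numeric score parsed from the Judge's reply. The prompts, function names, and reward handling are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an MAE-style Proposer/Solver/Judge round (not the paper's code).
# `generate` stands in for one shared LLM; it is stubbed here so the script runs as-is.

import random


def generate(prompt: str) -> str:
    """Stub for a call to a single shared LLM; replace with a real model call."""
    if prompt.startswith("PROPOSE"):
        return "What is the sum of the first 10 positive integers?"
    if prompt.startswith("SOLVE"):
        return "The sum is 55."
    # Judge prompt: return a score in [0, 1] as plain text.
    return f"{random.uniform(0.0, 1.0):.2f}"


def self_play_round() -> dict:
    """One round: the Proposer writes a question, the Solver answers, the Judge scores."""
    question = generate("PROPOSE: write one challenging but solvable question.")
    answer = generate(f"SOLVE: {question}")
    # The Judge rates the answer; a similar call could also rate question quality.
    judge_reply = generate(
        f"JUDGE: question={question} answer={answer} Score the answer from 0 to 1."
    )
    try:
        reward = float(judge_reply)
    except ValueError:
        reward = 0.0  # an unparseable judgment earns no reward
    return {"question": question, "answer": answer, "reward": reward}


if __name__ == "__main__":
    # Rollouts like these would feed an RL update (e.g., a policy-gradient step)
    # applied to the single underlying model, so all three roles co-evolve together.
    for rollout in (self_play_round() for _ in range(4)):
        print(rollout)
```

Because all three roles are instantiated from the same model, a single RL update on these rollouts improves question generation, solving, and judging at once, which is what allows the system to bootstrap without human-annotated data.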
Similar Papers
AgentEvolver: Towards Efficient Self-Evolving Agent System
Machine Learning (CS)
Teaches AI to learn tasks faster and cheaper.
CoMAS: Co-Evolving Multi-Agent Systems via Interaction Rewards
Computation and Language
AI learns better by talking to itself.