Score: 1

MultiAgentBench: Evaluating the Collaboration and Competition of LLM Agents

Published: March 3, 2025 | arXiv ID: 2503.01935v1

By: Kunlun Zhu, Hongyi Du, Zhaochen Hong, and more

Potential Business Impact:

Tests how AI teams work together to solve problems.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) have shown remarkable capabilities as autonomous agents, yet existing benchmarks either focus on single-agent tasks or are confined to narrow domains, failing to capture the dynamics of multi-agent coordination and competition. In this paper, we introduce MultiAgentBench, a comprehensive benchmark designed to evaluate LLM-based multi-agent systems across diverse, interactive scenarios. Our framework measures not only task completion but also the quality of collaboration and competition using novel, milestone-based key performance indicators. Moreover, we evaluate various coordination protocols (including star, chain, tree, and graph topologies) and innovative strategies such as group discussion and cognitive planning. Notably, gpt-4o-mini achieves the highest average task score, the graph structure performs best among coordination protocols in the research scenario, and cognitive planning improves milestone achievement rates by 3%. Code and datasets are publicly available at https://github.com/MultiagentBench/MARBLE.
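For readers unfamiliar with the coordination topologies the abstract mentions, the sketch below shows one hypothetical way star and chain communication structures among agents could be represented as adjacency lists. This is an illustrative assumption, not code from the paper or the MARBLE repository, and all names (star_topology, chain_topology, the agent roles) are invented for the example.

# Hypothetical sketch: multi-agent coordination topologies (star, chain)
# represented as adjacency lists. Names are illustrative and are not taken
# from the MultiAgentBench / MARBLE implementation.
from typing import Dict, List


def star_topology(agents: List[str]) -> Dict[str, List[str]]:
    """First agent acts as a central coordinator linked to all others."""
    hub, spokes = agents[0], agents[1:]
    edges = {hub: list(spokes)}
    edges.update({a: [hub] for a in spokes})
    return edges


def chain_topology(agents: List[str]) -> Dict[str, List[str]]:
    """Each agent talks only to its immediate neighbors in a line."""
    edges = {a: [] for a in agents}
    for left, right in zip(agents, agents[1:]):
        edges[left].append(right)
        edges[right].append(left)
    return edges


if __name__ == "__main__":
    team = ["planner", "coder", "reviewer", "tester"]
    print(star_topology(team))   # planner coordinates every other agent
    print(chain_topology(team))  # planner -> coder -> reviewer -> tester

Tree and graph topologies would generalize this in the obvious way: a tree restricts edges to parent-child links, while a general graph allows arbitrary agent-to-agent channels, which is the variant the abstract reports as performing best in the research scenario.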

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/MultiagentBench/MARBLE

Page Count
42 pages

Category
Computer Science:
Multiagent Systems