MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents
By: Kunlun Zhu, Hongyi Du, Zhaochen Hong, and more
Potential Business Impact:
Tests how AI teams work together to solve problems.
Large Language Models (LLMs) have shown remarkable capabilities as autonomous agents, yet existing benchmarks either focus on single-agent tasks or are confined to narrow domains, failing to capture the dynamics of multi-agent coordination and competition. In this paper, we introduce MultiAgentBench, a comprehensive benchmark designed to evaluate LLM-based multi-agent systems across diverse, interactive scenarios. Our framework measures not only task completion but also the quality of collaboration and competition using novel, milestone-based key performance indicators. Moreover, we evaluate various coordination protocols (including star, chain, tree, and graph topologies) and innovative strategies such as group discussion and cognitive planning. Notably, gpt-4o-mini achieves the highest average task score, the graph structure performs best among coordination protocols in the research scenario, and cognitive planning improves milestone achievement rates by 3%. Code and datasets are publicly available at https://github.com/MultiagentBench/MARBLE.
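The abstract names four coordination protocols (star, chain, tree, and graph topologies) without spelling out their structure. The sketch below is a minimal illustration of how those topologies can be represented as communication edges among agents; the function names and agent roles are hypothetical and do not reflect the actual MARBLE API.

```python
# Hypothetical sketch: NOT the MARBLE API. Shows how the star, chain, tree,
# and graph coordination protocols can be encoded as directed or undirected
# communication edges among a small team of agents.
from itertools import combinations


def star_edges(agents):
    """Hub-and-spoke: the first agent coordinates all others."""
    hub, *spokes = agents
    return [(hub, a) for a in spokes]


def chain_edges(agents):
    """Sequential hand-off: each agent passes messages to the next one."""
    return list(zip(agents, agents[1:]))


def tree_edges(agents):
    """Binary tree: agent i supervises agents 2i+1 and 2i+2."""
    edges = []
    for i, parent in enumerate(agents):
        for child_idx in (2 * i + 1, 2 * i + 2):
            if child_idx < len(agents):
                edges.append((parent, agents[child_idx]))
    return edges


def graph_edges(agents):
    """Fully connected graph: every pair of agents can communicate."""
    return list(combinations(agents, 2))


if __name__ == "__main__":
    team = ["planner", "coder", "reviewer", "tester"]  # example roles only
    for name, builder in [("star", star_edges), ("chain", chain_edges),
                          ("tree", tree_edges), ("graph", graph_edges)]:
        print(f"{name}: {builder(team)}")
```

Under this framing, the paper's finding that the graph structure performs best in the research scenario corresponds to the fully connected case, where no single agent bottlenecks communication.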
Similar Papers
Multi-Mission Tool Bench: Assessing the Robustness of LLM based Agents through Related and Dynamic Missions
Artificial Intelligence
Tests AI that handles many jobs at once.
Benchmarking LLMs' Swarm intelligence
Multiagent Systems
Tests if AI can work together like a swarm.
CO-Bench: Benchmarking Language Model Agents in Algorithm Search for Combinatorial Optimization
Computation and Language
Helps computers solve tricky planning problems better.