MCPAgentBench: A Real-world Task Benchmark for Evaluating LLM Agent MCP Tool Use
By: Wenrui Liu, Zixiang Liu, Elsie Dai, and more
Large Language Models (LLMs) are increasingly serving as autonomous agents, and their use of external tools via the Model Context Protocol (MCP) is an emerging trend. Existing MCP evaluation sets suffer from issues such as reliance on external MCP services and a lack of difficulty awareness. To address these limitations, we propose MCPAgentBench, a benchmark built on real-world MCP definitions and designed to evaluate the tool-use capabilities of agents. We construct a dataset containing authentic tasks and simulated MCP tools. The evaluation employs a dynamic sandbox environment that presents agents with candidate tool lists containing distractors, thereby testing their tool selection and discrimination abilities. Furthermore, we introduce comprehensive metrics to measure both task completion rate and execution efficiency. Experiments on a range of current mainstream LLMs reveal significant performance differences in handling complex, multi-step tool invocations. All code is open-sourced on GitHub.
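To make the evaluation setup concrete, the sketch below illustrates one plausible way a dynamic sandbox could mix a task's required tools with sampled distractors and score an episode on completion and efficiency. It is a minimal illustration only; the class and function names (ToolSpec, build_candidate_tools, score_episode), the distractor count, and the efficiency formula are assumptions for exposition and are not taken from the MCPAgentBench implementation.

import random
from dataclasses import dataclass, field


@dataclass
class ToolSpec:
    """Simplified MCP-style tool definition (name, description, parameter schema)."""
    name: str
    description: str
    parameters: dict = field(default_factory=dict)


def build_candidate_tools(gold_tools, distractor_pool, num_distractors=5, seed=0):
    """Mix the tools a task actually needs with sampled distractors and shuffle,
    so the agent must discriminate among candidates before invoking anything."""
    rng = random.Random(seed)
    distractors = rng.sample(distractor_pool, k=min(num_distractors, len(distractor_pool)))
    candidates = list(gold_tools) + distractors
    rng.shuffle(candidates)
    return candidates


def score_episode(required_calls, agent_calls):
    """Score one episode (illustrative metrics, not the paper's exact definitions).

    - completed: every required tool appears somewhere in the agent's call trace
    - efficiency: required calls divided by total calls made (1.0 = no wasted steps)
    """
    tools_used = {call["tool"] for call in agent_calls}
    completed = all(req["tool"] in tools_used for req in required_calls)
    efficiency = min(1.0, len(required_calls) / max(len(agent_calls), 1))
    return completed, efficiency


if __name__ == "__main__":
    gold = [ToolSpec("weather.get_forecast", "Get a city forecast")]
    pool = [ToolSpec(f"distractor.tool_{i}", "Unrelated tool") for i in range(20)]
    candidates = build_candidate_tools(gold, pool, num_distractors=5)
    # A hypothetical agent trace: one correct call plus one wasted call.
    trace = [{"tool": "distractor.tool_3"}, {"tool": "weather.get_forecast"}]
    print(score_episode([{"tool": "weather.get_forecast"}], trace))  # (True, 0.5)

Aggregating these per-episode scores across the dataset would yield the kind of completion-rate and efficiency metrics the abstract describes.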
Similar Papers
MCP-Bench: Benchmarking Tool-Using LLM Agents with Complex Real-World Tasks via MCP Servers
Computation and Language
Tests AI's ability to use many tools together.
MCPToolBench++: A Large Scale AI Agent Model Context Protocol MCP Tool Use Benchmark
Artificial Intelligence
Tests how well AI uses real-world tools.
MCP-AgentBench: Evaluating Real-World Language Agent Performance with MCP-Mediated Tools
Computation and Language
Tests how well AI assistants use tools.