Network and Systems Performance Characterization of MCP-Enabled LLM Agents

Published: October 20, 2025 | arXiv ID: 2511.07426v1

By: Zihao Ding, Mufeng Zhu, Yao Liu

Potential Business Impact:

Shows how to make tool-augmented AI agents more capable without letting token costs spiral.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Model Context Protocol (MCP) has recently gained attention within the AI community for providing a standardized way for large language models (LLMs) to interact with external tools and services, significantly enhancing their capabilities. However, MCP-enabled interactions carry extensive contextual information (system prompts, MCP tool definitions, and context histories), which dramatically inflates token usage. Because LLM providers charge per token, these expanded contexts can quickly escalate monetary costs and increase the computational load on LLM services. This paper presents a comprehensive measurement-based analysis of MCP-enabled interactions with LLMs, revealing trade-offs between capability, performance, and cost. We explore how different LLM models and MCP configurations affect key performance metrics, including token efficiency, monetary cost, task completion time, and task success rate, and suggest potential optimizations such as enabling parallel tool calls and implementing robust task abort mechanisms. These findings provide useful insights for developing more efficient, robust, and cost-effective MCP-enabled workflows.
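To make the cost mechanism concrete, the sketch below estimates how MCP tool definitions inflate per-request token counts. The tool schemas, the ~4-characters-per-token heuristic, and the helper names are illustrative assumptions, not from the paper; real MCP servers expose similar JSON schemas via `tools/list`, and those schemas are resent as context on every model request.

```python
import json

# Hypothetical MCP tool definitions (illustrative only); real MCP
# servers advertise comparable JSON schemas for each tool.
TOOLS = [
    {
        "name": "read_file",
        "description": "Read the contents of a file from the local filesystem.",
        "inputSchema": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
    {
        "name": "web_search",
        "description": "Search the web and return the top results.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "max_results": {"type": "integer"},
            },
            "required": ["query"],
        },
    },
]


def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English/JSON text.
    # A real tokenizer (e.g. tiktoken) would give exact counts.
    return len(text) // 4


def context_overhead(tools, turns: int) -> int:
    # Tool definitions accompany every request, so this fixed overhead
    # scales linearly with the number of conversation turns.
    per_turn = estimate_tokens(json.dumps(tools))
    return per_turn * turns


if __name__ == "__main__":
    per_turn = estimate_tokens(json.dumps(TOOLS))
    print(f"~{per_turn} tokens of tool definitions per request")
    print(f"~{context_overhead(TOOLS, 20)} tokens over a 20-turn session")
```

With dozens of tools registered, this fixed per-request overhead can dominate the prompt, which is the token-inflation effect the paper measures.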

Country of Origin
🇺🇸 United States

Page Count
12 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing