Securing the Model Context Protocol: Defending LLMs Against Tool Poisoning and Adversarial Attacks
By: Saeid Jamshidi , Kawser Wazed Nafi , Arghavan Moradi Dakhel and more
Potential Business Impact:
Secures AI tools from hidden, dangerous instructions.
The Model Context Protocol (MCP) enables Large Language Models to integrate external tools through structured descriptors, increasing autonomy in decision-making, task execution, and multi-agent workflows. However, this autonomy creates a largely overlooked security gap. Existing defenses focus on prompt-injection attacks and fail to address threats embedded in tool metadata, leaving MCP-based systems exposed to semantic manipulation. This work analyzes three classes of semantic attacks on MCP-integrated systems: (1) Tool Poisoning, where adversarial instructions are hidden in tool descriptors; (2) Shadowing, where trusted tools are indirectly compromised through contaminated shared context; and (3) Rug Pulls, where descriptors are altered after approval to subvert behavior. To counter these threats, we introduce a layered security framework with three components: RSA-based manifest signing to enforce descriptor integrity, LLM-on-LLM semantic vetting to detect suspicious tool definitions, and lightweight heuristic guardrails that block anomalous tool behavior at runtime. Through evaluation of GPT-4, DeepSeek, and Llama-3.5 across eight prompting strategies, we find that security performance varies widely by model architecture and reasoning method. GPT-4 blocks about 71 percent of unsafe tool calls, balancing latency and safety. DeepSeek shows the highest resilience to Shadowing attacks but with greater latency, while Llama-3.5 is fastest but least robust. Our results show that the proposed framework reduces unsafe tool invocation rates without model fine-tuning or internal modification.
Similar Papers
MCP-Guard: A Defense Framework for Model Context Protocol Integrity in Large Language Model Applications
Cryptography and Security
Protects smart computer helpers from being tricked.
MCP-Guard: A Defense Framework for Model Context Protocol Integrity in Large Language Model Applications
Cryptography and Security
Protects smart AI from being tricked.
Systematic Analysis of MCP Security
Cryptography and Security
Finds ways AI can be tricked by tools.