MCP-Guard: A Defense Framework for Model Context Protocol Integrity in Large Language Model Applications
By: Wenpeng Xing , Zhonghao Qi , Yupeng Qin and more
Potential Business Impact:
Protects smart computer helpers from being tricked.
The integration of Large Language Models (LLMs) with external tools via protocols such as the Model Context Protocol (MCP) introduces critical security vulnerabilities, including prompt injection, data exfiltration, and other threats. To counter these challenges, we propose MCP-Guard, a robust, layered defense architecture designed for LLM--tool interactions. MCP-Guard employs a three-stage detection pipeline that balances efficiency with accuracy: it progresses from lightweight static scanning for overt threats and a deep neural detector for semantic attacks, to our fine-tuned E5-based model achieves (96.01) accuracy in identifying adversarial prompts. Finally, a lightweight LLM arbitrator synthesizes these signals to deliver the final decision while minimizing false positives. To facilitate rigorous training and evaluation, we also introduce MCP-AttackBench, a comprehensive benchmark of over 70,000 samples. Sourced from public datasets and augmented by GPT-4, MCP-AttackBench simulates diverse, real-world attack vectors in the MCP format, providing a foundation for future research into securing LLM-tool ecosystems.
Similar Papers
MCP-Guard: A Defense Framework for Model Context Protocol Integrity in Large Language Model Applications
Cryptography and Security
Protects smart AI from being tricked.
Securing the Model Context Protocol: Defending LLMs Against Tool Poisoning and Adversarial Attacks
Cryptography and Security
Secures AI tools from hidden, dangerous instructions.
MCPGuard : Automatically Detecting Vulnerabilities in MCP Servers
Cryptography and Security
Fixes security holes in smart AI tools.