Systematization of Knowledge: Security and Safety in the Model Context Protocol Ecosystem
By: Shiva Gaire , Srijan Gyawali , Saroj Mishra and more
The Model Context Protocol (MCP) has emerged as the de facto standard for connecting Large Language Models (LLMs) to external data and tools, effectively functioning as the "USB-C for Agentic AI." While this decoupling of context and execution solves critical interoperability challenges, it introduces a profound new threat landscape where the boundary between epistemic errors (hallucinations) and security breaches (unauthorized actions) dissolves. This Systematization of Knowledge (SoK) aims to provide a comprehensive taxonomy of risks in the MCP ecosystem, distinguishing between adversarial security threats (e.g., indirect prompt injection, tool poisoning) and epistemic safety hazards (e.g., alignment failures in distributed tool delegation). We analyze the structural vulnerabilities of MCP primitives, specifically Resources, Prompts, and Tools, and demonstrate how "context" can be weaponized to trigger unauthorized operations in multi-agent environments. Furthermore, we survey state-of-the-art defenses, ranging from cryptographic provenance (ETDI) to runtime intent verification, and conclude with a roadmap for securing the transition from conversational chatbots to autonomous agentic operating systems.
Similar Papers
Securing the Model Context Protocol (MCP): Risks, Controls, and Governance
Cryptography and Security
Secures AI agents from hackers and mistakes.
Toward Understanding Security Issues in the Model Context Protocol Ecosystem
Cryptography and Security
Finds and fixes security flaws in AI tools.
Systematic Analysis of MCP Security
Cryptography and Security
Finds ways AI can be tricked by tools.