SentinelAgent: Graph-based Anomaly Detection in Multi-Agent Systems
By: Xu He, Di Wu, Yan Zhai, and more
Potential Business Impact:
Keeps teams of AI agents from making mistakes or being tricked.
The rise of large language model (LLM)-based multi-agent systems (MAS) introduces new security and reliability challenges. While these systems show great promise in decomposing and coordinating complex tasks, they also face multi-faceted risks spanning prompt manipulation, unsafe tool usage, and emergent agent miscoordination. Existing guardrail mechanisms offer only partial protection, operating primarily at the input-output level, and fall short of addressing systemic or multi-point failures in MAS. In this work, we present a system-level anomaly detection framework tailored for MAS that integrates structural modeling with runtime behavioral oversight. Our approach consists of two components. First, we propose a graph-based framework that models agent interactions as dynamic execution graphs, enabling semantic anomaly detection at the node, edge, and path levels. Second, we introduce SentinelAgent, a pluggable, LLM-powered oversight agent that observes, analyzes, and intervenes in MAS execution based on security policies and contextual reasoning. By bridging abstract detection logic with actionable enforcement, our method detects not only single-point faults and prompt injections but also multi-agent collusion and latent exploit paths. We validate our framework through two case studies: an email assistant and Microsoft's Magentic-One system, demonstrating its ability to detect covert risks and provide explainable root-cause attribution. Our work lays the foundation for more trustworthy, monitorable, and secure agent-based AI ecosystems.
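The abstract's three detection granularities lend themselves to a small structural sketch. The Python snippet below is a minimal illustration, not the authors' implementation: the ExecutionGraph class, the rule functions, and the policy inputs (allowed_tools, allowed_edges, forbidden) are hypothetical names invented here to show how node-, edge-, and path-level checks over a dynamic execution graph might fit together.

```python
# Illustrative sketch of graph-based MAS anomaly detection; all names are
# hypothetical and not taken from the paper's code.
from dataclasses import dataclass, field

@dataclass
class AgentEvent:
    """One observed agent action: the actor, the action type, and its payload."""
    agent: str
    action: str   # e.g. "send_message" or "call_tool"
    content: str

@dataclass
class ExecutionGraph:
    """Dynamic execution graph: agents are nodes, observed interactions are edges."""
    nodes: set[str] = field(default_factory=set)
    edges: list[tuple[str, str, AgentEvent]] = field(default_factory=list)

    def record(self, src: str, dst: str, event: AgentEvent) -> None:
        """Append an interaction as it is observed at runtime."""
        self.nodes.update((src, dst))
        self.edges.append((src, dst, event))

    def paths_from(self, start: str) -> list[list[str]]:
        """Enumerate maximal simple paths reachable from a starting node."""
        paths, frontier = [], [[start]]
        while frontier:
            path = frontier.pop()
            extended = False
            for src, dst, _ in self.edges:
                if src == path[-1] and dst not in path:
                    frontier.append(path + [dst])
                    extended = True
            if not extended:
                paths.append(path)
        return paths

# Hypothetical rules at the three granularities the abstract names.
def check_node(graph: ExecutionGraph, allowed_tools: dict[str, set[str]]) -> list[str]:
    """Node level: flag agents that invoke tools outside their allowed set."""
    return [f"node: {src} used unauthorized tool {ev.content!r}"
            for src, _, ev in graph.edges
            if ev.action == "call_tool" and ev.content not in allowed_tools.get(src, set())]

def check_edge(graph: ExecutionGraph, allowed_edges: set[tuple[str, str]]) -> list[str]:
    """Edge level: flag interactions between agents that should never communicate."""
    return [f"edge: unexpected {s} -> {d}"
            for s, d, _ in graph.edges if (s, d) not in allowed_edges]

def check_path(graph: ExecutionGraph, forbidden: list[list[str]]) -> list[str]:
    """Path level: flag multi-hop sequences matching known exploit patterns."""
    alerts = []
    for path in graph.paths_from("user"):
        for bad in forbidden:
            if any(path[i:i + len(bad)] == bad for i in range(len(path))):
                alerts.append(f"path: exploit pattern {' -> '.join(bad)} in {path}")
    return alerts
```

In the paper's design, the SentinelAgent additionally applies LLM-based contextual reasoning and can intervene in execution; the sketch only shows the structural half, i.e. how fixed policies could be evaluated against the graph as events stream in.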
Similar Papers
Explainable and Fine-Grained Safeguarding of LLM Multi-Agent Systems via Bi-Level Graph Anomaly Detection
Cryptography and Security
Finds bad AI helpers in group chats.
Sentinel Agents for Secure and Trustworthy Agentic AI in Multi-Agent Systems
Artificial Intelligence
Protects smart systems from bad actors.
Monitoring LLM-based Multi-Agent Systems Against Corruptions via Node Evaluation
Cryptography and Security
Protects smart AI teams from bad communication.