AgentShield: Make MAS more secure and efficient
By: Kaixiang Wang , Zhaojiacheng Zhou , Bunyod Suvonov and more
Potential Business Impact:
Protects smart computer teams from bad guys.
Large Language Model (LLM)-based Multi-Agent Systems (MAS) offer powerful cooperative reasoning but remain vulnerable to adversarial attacks, where compromised agents can undermine the system's overall performance. Existing defenses either depend on single trusted auditors, creating single points of failure, or sacrifice efficiency for robustness. To resolve this tension, we propose \textbf{AgentShield}, a distributed framework for efficient, decentralized auditing. AgentShield introduces a novel three-layer defense: \textbf{(i) Critical Node Auditing} prioritizes high-influence agents via topological analysis; \textbf{(ii) Light Token Auditing} implements a cascade protocol using lightweight sentry models for rapid discriminative verification; and \textbf{(iii) Two-Round Consensus Auditing} triggers heavyweight arbiters only upon uncertainty to ensure global agreement. This principled design optimizes the robustness-efficiency trade-off. Experiments demonstrate that AgentShield achieves a 92.5\% recovery rate and reduces auditing overhead by over 70\% compared to existing methods, maintaining high collaborative accuracy across diverse MAS topologies and adversarial scenarios.
Similar Papers
Decentralized Multi-Agent System with Trust-Aware Communication
Multiagent Systems
Builds safer, smarter robot teams that can't be stopped.
Monitoring LLM-based Multi-Agent Systems Against Corruptions via Node Evaluation
Cryptography and Security
Protects smart AI teams from bad communication.
Sentinel Agents for Secure and Trustworthy Agentic AI in Multi-Agent Systems
Artificial Intelligence
Protects smart systems from bad actors.