Demonstrations of Integrity Attacks in Multi-Agent Systems
By: Can Zheng , Yuhan Cao , Xiaoning Dong and more
Potential Business Impact:
Protects smart teams from sneaky computer tricks.
Large Language Models (LLMs) have demonstrated remarkable capabilities in natural language understanding, code generation, and complex planning. Simultaneously, Multi-Agent Systems (MAS) have garnered attention for their potential to enable cooperation among distributed agents. However, from a multi-party perspective, MAS could be vulnerable to malicious agents that exploit the system to serve self-interests without disrupting its core functionality. This work explores integrity attacks where malicious agents employ subtle prompt manipulation to bias MAS operations and gain various benefits. Four types of attacks are examined: \textit{Scapegoater}, who misleads the system monitor to underestimate other agents' contributions; \textit{Boaster}, who misleads the system monitor to overestimate their own performance; \textit{Self-Dealer}, who manipulates other agents to adopt certain tools; and \textit{Free-Rider}, who hands off its own task to others. We demonstrate that strategically crafted prompts can introduce systematic biases in MAS behavior and executable instructions, enabling malicious agents to effectively mislead evaluation systems and manipulate collaborative agents. Furthermore, our attacks can bypass advanced LLM-based monitors, such as GPT-4o-mini and o3-mini, highlighting the limitations of current detection mechanisms. Our findings underscore the critical need for MAS architectures with robust security protocols and content validation mechanisms, alongside monitoring systems capable of comprehensive risk scenario assessment.
Similar Papers
The Dark Side of LLMs: Agent-based Attacks for Complete Computer Takeover
Cryptography and Security
AI can be tricked into installing computer viruses.
Shadows in the Code: Exploring the Risks and Defenses of LLM-based Multi-Agent Software Development Systems
Cryptography and Security
Makes AI-built apps safer from hidden computer tricks.
Can an Individual Manipulate the Collective Decisions of Multi-Agents?
Computation and Language
Tricks AI teams by fooling just one AI.