Shadows in the Code: Exploring the Risks and Defenses of LLM-based Multi-Agent Software Development Systems
By: Xiaoqing Wang , Keman Huang , Bin Liang and more
Potential Business Impact:
Makes AI-built apps safer from hidden computer tricks.
The rapid advancement of Large Language Model (LLM)-driven multi-agent systems has significantly streamlined software developing tasks, enabling users with little technical expertise to develop executable applications. While these systems democratize software creation through natural language requirements, they introduce significant security risks that remain largely unexplored. We identify two risky scenarios: Malicious User with Benign Agents (MU-BA) and Benign User with Malicious Agents (BU-MA). We introduce the Implicit Malicious Behavior Injection Attack (IMBIA), demonstrating how multi-agent systems can be manipulated to generate software with concealed malicious capabilities beneath seemingly benign applications, and propose Adv-IMBIA as a defense mechanism. Evaluations across ChatDev, MetaGPT, and AgentVerse frameworks reveal varying vulnerability patterns, with IMBIA achieving attack success rates of 93%, 45%, and 71% in MU-BA scenarios, and 71%, 84%, and 45% in BU-MA scenarios. Our defense mechanism reduced attack success rates significantly, particularly in the MU-BA scenario. Further analysis reveals that compromised agents in the coding and testing phases pose significantly greater security risks, while also identifying critical agents that require protection against malicious user exploitation. Our findings highlight the urgent need for robust security measures in multi-agent software development systems and provide practical guidelines for implementing targeted, resource-efficient defensive strategies.
Similar Papers
The Dark Side of LLMs: Agent-based Attacks for Complete Computer Takeover
Cryptography and Security
AI can be tricked into installing computer viruses.
Toward a Safe Internet of Agents
Multiagent Systems
Makes AI agents safer and more trustworthy.
Demonstrations of Integrity Attacks in Multi-Agent Systems
Computation and Language
Protects smart teams from sneaky computer tricks.