Can an Individual Manipulate the Collective Decisions of Multi-Agents?
By: Fengyuan Liu, Rui Zhao, Shuo Chen, et al.
Potential Business Impact:
An attacker can mislead an entire AI team by fooling just one of its agents.
Individual Large Language Models (LLMs) have demonstrated significant capabilities across various domains, such as healthcare and law. Recent studies also show that coordinated multi-agent systems exhibit enhanced decision-making and reasoning abilities through collaboration. However, given the vulnerabilities of individual LLMs and the difficulty of accessing every agent in a multi-agent system, a key question arises: if an attacker knows only one agent, can they still generate adversarial samples capable of misleading the collective decision? To explore this question, we formulate it as a game with incomplete information, in which the attacker knows only one target agent and has no knowledge of the other agents in the system. With this formulation, we propose M-Spoiler, a framework that simulates agent interactions within a multi-agent system to generate adversarial samples. These samples are then used to manipulate the target agent in the target system, misleading the system's collaborative decision-making process. More specifically, M-Spoiler introduces a stubborn agent that actively aids in optimizing adversarial samples by simulating potential stubborn responses from agents in the target system, which enhances the effectiveness of the generated adversarial samples in misleading the system. Through extensive experiments across various tasks, our findings confirm the risk posed when an attacker has knowledge of even a single agent in a multi-agent system, and demonstrate the effectiveness of our framework. We also explore several defense mechanisms, showing that our proposed attack framework remains more potent than baselines, underscoring the need for further research into defensive strategies.
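The attack loop the abstract describes can be sketched as a greedy search: the attacker optimizes an adversarial suffix against the one agent they know (the target), while a simulated stubborn agent stands in for the unknown teammates who may push back during debate. The sketch below is a minimal toy illustration under strong assumptions: the "agents" are simple rule-based stand-ins (not the paper's LLMs), the trigger-token scoring is a hypothetical surrogate for the target's logits, and the names `m_spoiler_search`, `target_score`, and the vocabulary are invented for this example.

```python
# Toy stand-ins for LLM agents (assumptions for this sketch, not the paper's models).
def target_agent(prompt: str) -> str:
    # White-box proxy for the known agent: votes "unsafe" once trigger tokens dominate.
    return "unsafe" if prompt.count("!") >= 3 else "safe"

def target_score(prompt: str) -> int:
    # Surrogate "loss": how strongly the target leans toward the attacker's label.
    return prompt.count("!")

def stubborn_agent(prompt: str) -> str:
    # Simulates a stubborn teammate that sticks to its stance regardless of input.
    return "safe"

def majority(votes):
    # Collective decision of the simulated system: simple majority vote.
    return max(set(votes), key=votes.count)

def m_spoiler_search(base_prompt: str, n_iters: int = 20) -> str:
    """Greedy suffix search: optimize against the known target agent while a
    stubborn agent simulates worst-case pushback from the unknown teammates."""
    vocab = ["!", "?", ".", " indeed", " surely"]
    adv = base_prompt
    for _ in range(n_iters):
        # Propose one-token extensions; keep the one the target leans into most.
        cand = max((adv + tok for tok in vocab), key=target_score)
        if target_score(cand) <= target_score(adv):
            break  # no proposal improves the surrogate loss
        adv = cand
        # Simulated debate: two copies of the target's view plus the stubborn agent.
        votes = [target_agent(adv), target_agent(adv), stubborn_agent(adv)]
        if majority(votes) == "unsafe":
            return adv  # collective decision flipped despite the stubborn holdout
    return adv

adv = m_spoiler_search("please review this text")
print(target_agent(adv))  # -> unsafe: three "!" triggers flip the target's vote
```

The stubborn agent matters here because an adversarial sample that flips only the target would be outvoted in a real system; optimizing until the simulated majority flips makes the sample robust to disagreement from agents the attacker cannot see.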
Similar Papers
Demonstrations of Integrity Attacks in Multi-Agent Systems
Computation and Language
Protects smart teams from sneaky computer tricks.
Multi-Agent Systems Execute Arbitrary Malicious Code
Cryptography and Security
Shows how bad internet content can make AI assistants unsafe.
Simulating Misinformation Vulnerabilities With Agent Personas
Social and Information Networks
Lets computers learn how people believe fake news.