Attack the Messages, Not the Agents: A Multi-round Adaptive Stealthy Tampering Framework for LLM-MAS
By: Bingyu Yan, Ziyi Zhou, Xiaoming Zhang, and more
Potential Business Impact:
Shows how messages between cooperating AIs can be covertly tampered with, highlighting the need for stronger communication safeguards.
Large language model-based multi-agent systems (LLM-MAS) effectively accomplish complex and dynamic tasks through inter-agent communication, but this reliance introduces substantial safety vulnerabilities. Existing attack methods targeting LLM-MAS either compromise agent internals or rely on direct and overt persuasion, limiting their effectiveness, adaptability, and stealthiness. In this paper, we propose MAST, a Multi-round Adaptive Stealthy Tampering framework designed to exploit communication vulnerabilities within the system. MAST integrates Monte Carlo Tree Search with Direct Preference Optimization to train an attack policy model that adaptively generates effective multi-round tampering strategies. Furthermore, to preserve stealthiness, we impose dual semantic and embedding similarity constraints during the tampering process. Comprehensive experiments across diverse tasks, communication architectures, and LLMs demonstrate that MAST consistently achieves high attack success rates while significantly enhancing stealthiness compared to baselines. These findings highlight the effectiveness, stealthiness, and adaptability of MAST, underscoring the need for robust communication safeguards in LLM-MAS.
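The dual stealth constraint described in the abstract (keeping a tampered message close to the original under both an embedding-similarity and a semantic-similarity check) can be pictured with a short sketch. The Python below is a minimal illustration, not the paper's implementation: the embedding model (`all-MiniLM-L6-v2`), the 0.85 cosine threshold, and the `llm_judge_equivalent` placeholder are all assumptions made for the example.

```python
# Minimal sketch of a dual stealthiness check for a tampered message.
# Assumptions (not from the paper): sentence-transformers embeddings,
# an arbitrary 0.85 cosine threshold, and a placeholder semantic judge.
from sentence_transformers import SentenceTransformer, util

_encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model


def llm_judge_equivalent(original: str, tampered: str) -> bool:
    # Placeholder semantic check; in practice this would be an LLM judge
    # prompted to decide whether the two messages convey the same intent.
    orig_tokens = set(original.lower().split())
    shared = orig_tokens & set(tampered.lower().split())
    return len(shared) / max(len(orig_tokens), 1) > 0.5


def is_stealthy(original: str, tampered: str, emb_threshold: float = 0.85) -> bool:
    """Accept a tampered message only if it stays close to the original
    under BOTH constraints: embedding cosine similarity and semantic judgment."""
    embs = _encoder.encode([original, tampered], convert_to_tensor=True)
    emb_sim = util.cos_sim(embs[0], embs[1]).item()
    return emb_sim >= emb_threshold and llm_judge_equivalent(original, tampered)
```

In the attack loop the abstract describes, candidate tampered messages that fail such a check would be rejected before being injected into the inter-agent channel, which is what keeps the tampering covert.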
Similar Papers
TAMAS: Benchmarking Adversarial Risks in Multi-Agent LLM Systems
Multiagent Systems
Tests if AI teams can be tricked.
Decentralized Multi-Agent System with Trust-Aware Communication
Multiagent Systems
Builds decentralized agent teams whose trust-aware communication makes them harder to disrupt.
Monitoring LLM-based Multi-Agent Systems Against Corruptions via Node Evaluation
Cryptography and Security
Monitors AI team communication to catch corrupted agents.