Monitoring LLM-based Multi-Agent Systems Against Corruptions via Node Evaluation
By: Chengcan Wu, Zhixin Zhang, Mingqian Xu and more
Potential Business Impact:
Protects smart AI teams from bad communication.
Large Language Model (LLM)-based Multi-Agent Systems (MAS) have become a popular paradigm for AI applications. However, trustworthiness issues in MAS remain a critical concern. Unlike single-agent systems, MAS involve more complex communication processes, making them susceptible to corruption attacks. To mitigate this issue, several defense mechanisms have been developed based on the graph representation of MAS, in which agents are nodes and communications are edges. These methods, however, focus predominantly on static graph defense, attempting either to detect attacks in a fixed graph structure or to optimize a static topology with certain defensive capabilities. To address this limitation, we propose a dynamic defense paradigm for MAS graph structures: it continuously monitors communication within the MAS graph, dynamically adjusts the graph topology, precisely disrupts malicious communications, and thereby defends against evolving and diverse dynamic attacks. Experimental results in increasingly complex and dynamic MAS environments demonstrate that our method significantly outperforms existing MAS defense mechanisms, providing an effective guardrail for their trustworthy application. Our code is available at https://github.com/ChengcanWu/Monitoring-LLM-Based-Multi-Agent-Systems.
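The defense loop the abstract describes (monitor each communication, score it, and prune edges carrying suspected-malicious messages) can be pictured with a minimal sketch. All names below, including Agent, score_message, monitor_round, and SUSPICION_THRESHOLD, are illustrative assumptions rather than the authors' API; the actual implementation is in the linked repository.

```python
# Minimal sketch of a dynamic monitor-and-prune defense over a MAS graph.
# Agents are nodes; directed communication links are edges.
from dataclasses import dataclass, field

SUSPICION_THRESHOLD = 0.8  # hypothetical cutoff for flagging a message as corrupted


@dataclass
class Agent:
    name: str
    neighbors: set[str] = field(default_factory=set)  # outgoing communication edges


def score_message(sender: str, receiver: str, message: str) -> float:
    """Placeholder node/edge evaluator.

    In the paper's setting this would be an LLM-based monitor rating how
    likely a message is malicious; here it returns a dummy benign score.
    """
    return 0.0


def monitor_round(agents: dict[str, Agent], messages: list[tuple[str, str, str]]) -> None:
    """One monitoring step: score every communication, prune suspicious edges."""
    for sender, receiver, text in messages:
        if score_message(sender, receiver, text) >= SUSPICION_THRESHOLD:
            # Dynamically adjust topology: cut the edge that carried the
            # suspected-malicious communication.
            agents[sender].neighbors.discard(receiver)


# Usage: two agents, one benign round of communication.
agents = {"planner": Agent("planner", {"coder"}), "coder": Agent("coder")}
monitor_round(agents, [("planner", "coder", "Write a sorting function.")])
print(agents["planner"].neighbors)  # edge kept, since the dummy score is benign
```

In practice the scoring function would be an LLM judge or a trained classifier over message content, and a real system might restore pruned edges once a node is cleared; this sketch only fixes the shape of the monitor-then-rewire loop.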
Similar Papers
Decentralized Multi-Agent System with Trust-Aware Communication
Multiagent Systems
Builds safer, smarter robot teams that can't be stopped.
AgentShield: Make MAS more secure and efficient
Multiagent Systems
Protects smart computer teams from bad guys.
Measuring the Security of Mobile LLM Agents under Adversarial Prompts from Untrusted Third-Party Channels
Cryptography and Security
Finds ways bad apps trick phone AI.