Enhancing Robustness of LLM-Driven Multi-Agent Systems through Randomized Smoothing

Published: July 5, 2025 | arXiv ID: 2507.04105v1

By: Jinwei Hu, Yi Dong, Zhengtao Ding, and more

Potential Business Impact:

Provides probabilistic guarantees that agent decisions in LLM-driven multi-agent systems stay safe under adversarial influence, preventing dangerous mistakes and hallucinations from propagating.

Business Areas:
Simulation Software

This paper presents a defense framework for enhancing the safety of large language model (LLM) empowered multi-agent systems (MAS) in safety-critical domains such as aerospace. We apply randomized smoothing, a statistical robustness certification technique, to the MAS consensus context, enabling probabilistic guarantees on agent decisions under adversarial influence. Unlike traditional verification methods, our approach operates in black-box settings and employs a two-stage adaptive sampling mechanism to balance robustness and computational efficiency. Simulation results demonstrate that our method effectively prevents the propagation of adversarial behaviors and hallucinations while maintaining consensus performance. This work provides a practical and scalable path toward safe deployment of LLM-based MAS in real-world, high-stakes environments.
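To make the randomized-smoothing idea concrete, the sketch below shows a generic certification procedure in the style of Cohen et al. (2019): a black-box decision function is queried many times under Gaussian input noise, the majority vote becomes the smoothed decision, and a lower confidence bound on the top vote's probability yields a certified L2 robustness radius. This is not the paper's implementation; `toy_agent`, the parameters, and the single-stage sampling are illustrative assumptions (the paper's two-stage adaptive sampling is not reproduced here).

```python
import math
import random
from collections import Counter
from statistics import NormalDist

def certify_smoothed_decision(decide, x, sigma=0.5, n=500, alpha=0.001):
    """Majority-vote a black-box decision under Gaussian noise.

    Returns (label, certified_radius); a radius of 0.0 means abstain.
    decide: black-box function mapping an input vector to a discrete label.
    """
    votes = Counter()
    for _ in range(n):
        noisy_x = [xi + random.gauss(0.0, sigma) for xi in x]
        votes[decide(noisy_x)] += 1
    label, count = votes.most_common(1)[0]
    p_hat = count / n
    # Hoeffding lower confidence bound on the top-class probability
    # (holds with probability at least 1 - alpha).
    p_lo = p_hat - math.sqrt(math.log(1.0 / alpha) / (2.0 * n))
    if p_lo <= 0.5:
        return label, 0.0  # majority not certifiable; abstain
    # Certified L2 radius of a Gaussian-smoothed classifier:
    # r = sigma * Phi^{-1}(p_lo).
    return label, sigma * NormalDist().inv_cdf(p_lo)

# Hypothetical agent policy: flags a state as "unsafe" past a threshold.
def toy_agent(state):
    return "unsafe" if sum(state) > 10.0 else "safe"

random.seed(0)
label, radius = certify_smoothed_decision(toy_agent, [0.0, 0.0, 0.0])
```

Because the procedure only needs input-output queries of `decide`, it matches the black-box setting described in the abstract; in a MAS consensus context, `decide` would stand in for an agent's (or the ensemble's) decision rule.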

Page Count
9 pages

Category
Computer Science:
Artificial Intelligence