Specification and Evaluation of Multi-Agent LLM Systems -- Prototype and Cybersecurity Applications

Published: June 12, 2025 | arXiv ID: 2506.10467v4

By: Felix Härer

Potential Business Impact:

Enables teams of AI agents to work together on complex cybersecurity tasks, such as server and network security checks.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Recent advancements in LLMs indicate potential for novel applications, as evidenced by the reasoning capabilities of the latest OpenAI and DeepSeek models. To apply these models to domain-specific applications beyond text generation, LLM-based multi-agent systems can be used to solve complex tasks, particularly by combining reasoning techniques, code generation, and software execution across multiple, potentially specialized LLMs. However, while many evaluations are performed on LLMs, reasoning techniques, and applications individually, their joint specification and combined application are not well understood. Defined specifications for multi-agent LLM systems are required to explore their potential and suitability for specific applications, allowing for systematic evaluations of LLMs, reasoning techniques, and related aspects. This paper reports the results of exploratory research on (1) multi-agent specification, by introducing an agent schema language, and (2) the execution and evaluation of such specifications through a multi-agent system architecture and prototype. The specification language, system architecture, and prototype are presented here for the first time, building on an LLM system from prior research. Test cases involving cybersecurity tasks indicate the feasibility of the architecture and evaluation approach. As a result, evaluations are demonstrated for question answering, server security, and network security tasks, completed correctly by agents using LLMs from OpenAI and DeepSeek.
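
The paper's agent schema language is not reproduced in this summary. As a rough illustration only, the following Python sketch shows what a declarative specification of cooperating, specialized LLM agents for a cybersecurity check might look like; all names (AgentSpec, TaskSpec, plan, the example models and roles) are hypothetical assumptions, not the paper's actual schema.

```python
# Hypothetical sketch only: the paper's actual agent schema language is not shown here.
# AgentSpec, TaskSpec, and plan() are illustrative assumptions, not the authors' API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AgentSpec:
    """Declarative description of one LLM agent in the system."""
    name: str
    model: str                      # e.g. an OpenAI or DeepSeek model identifier
    role: str                       # natural-language role / system prompt
    tools: List[str] = field(default_factory=list)  # tools the agent may invoke


@dataclass
class TaskSpec:
    """A task routed through a sequence of specialized agents."""
    description: str
    pipeline: List[AgentSpec]


def plan(task: TaskSpec) -> None:
    """Dry run: print which agent would handle each step (no model calls)."""
    print(f"Task: {task.description}")
    for step, agent in enumerate(task.pipeline, start=1):
        print(f"  step {step}: {agent.name} ({agent.model}) - {agent.role}; tools={agent.tools}")


if __name__ == "__main__":
    analyst = AgentSpec("analyst", "gpt-4o", "reason about the security question")
    coder = AgentSpec("coder", "deepseek-chat", "generate a script for the check", ["python"])
    executor = AgentSpec("executor", "gpt-4o-mini", "run the script and summarize results", ["shell"])
    plan(TaskSpec("check open ports on a test server", [analyst, coder, executor]))
```

A real specification in the paper's schema language would additionally cover execution and evaluation details; this sketch only illustrates the general idea of declaratively wiring specialized LLMs into a pipeline.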

Country of Origin
🇨🇭 Switzerland

Page Count
8 pages

Category
Computer Science:
Cryptography and Security