Beyond Single-Agent Safety: A Taxonomy of Risks in LLM-to-LLM Interactions

Published: December 2, 2025 | arXiv ID: 2512.02682v1

By: Piercosma Bisconti, Marcello Galisai, Federico Pierucci, and others

Potential Business Impact:

Helps prevent groups of interacting AI models from drifting into collective failures, even when each individual model behaves safely on its own.

Business Areas:
Simulation Software

This paper examines why safety mechanisms designed for human-model interaction do not scale to environments where large language models (LLMs) interact with each other. Most current governance practices still rely on single-agent safety: containment, prompts, fine-tuning, and moderation layers that constrain individual model behavior but leave the dynamics of multi-model interaction ungoverned. These mechanisms assume a dyadic setting: one model responding to one user under stable oversight. Yet research and industrial development are rapidly shifting toward LLM-to-LLM ecosystems, where outputs are recursively reused as inputs across chains of agents. In such systems, local compliance can aggregate into collective failure even when every model is individually aligned. We propose a conceptual transition from model-level safety to system-level safety, introducing the framework of the Emergent Systemic Risk Horizon (ESRH) to formalize how instability arises from interaction structure rather than from isolated misbehavior. The paper contributes (i) a theoretical account of collective risk in interacting LLMs, (ii) a taxonomy connecting micro-, meso-, and macro-level failure modes, and (iii) a design proposal for InstitutionalAI, an architecture for embedding adaptive oversight within multi-agent systems.
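To make the abstract's core claim concrete, the toy Python sketch below (not from the paper; all names such as agent, local_check, and system_monitor are hypothetical) shows how per-step checks can all pass while the chained result drifts out of bounds: each simulated agent nudges a value by less than its local tolerance, yet after several hops the end-to-end change exceeds what a system-level monitor would accept.

```python
"""Toy illustration of local compliance aggregating into system-level drift
in an LLM-to-LLM chain. This is a sketch under assumed thresholds, not the
paper's implementation or the Institutional AI architecture."""

LOCAL_TOLERANCE = 0.10   # each agent may change the value by at most 10%
SYSTEM_TOLERANCE = 0.25  # end-to-end drift the system is willing to accept


def agent(value: float, bias: float = 1.08) -> float:
    """Stand-in for one LLM agent: exaggerates the claim it receives by ~8%."""
    return value * bias


def local_check(before: float, after: float) -> bool:
    """Single-agent safety: is this one step within local tolerance?"""
    return abs(after - before) / before <= LOCAL_TOLERANCE


def system_monitor(original: float, final: float) -> bool:
    """System-level safety: compare the chain's final output to its input."""
    return abs(final - original) / original <= SYSTEM_TOLERANCE


def run_chain(value: float, n_agents: int = 6) -> float:
    """Recursively feed each agent's output to the next agent."""
    current = value
    for i in range(n_agents):
        proposed = agent(current)
        # Every per-step check passes: each agent is "individually aligned".
        assert local_check(current, proposed), f"agent {i} blocked locally"
        current = proposed
    return current


if __name__ == "__main__":
    start = 100.0
    end = run_chain(start)
    print(f"start={start:.1f}, end={end:.1f}")          # ~158.7 after 6 hops
    print("local checks: all passed")
    print("system check:", "passed" if system_monitor(start, end) else "FAILED")
```

Running the sketch, every local check succeeds while the cumulative drift (about 59%) fails the system-level check, which is the kind of interaction-structure failure the ESRH framework is meant to formalize.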

Page Count
13 pages

Category
Computer Science:
Multiagent Systems