Convergence of Outputs When Two Large Language Models Interact in a Multi-Agentic Setup

Published: December 6, 2025 | arXiv ID: 2512.06256v1

By: Aniruddha Maiti, Satya Nimmagadda, Kartha Veerya Jammuladinne, and more

Potential Business Impact:

Computers talking to each other get stuck repeating words.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In this work, we report what happens when two large language models respond to each other for many turns without any outside input in a multi-agent setup. The setup begins with a short seed sentence. After that, each model reads the other's output and generates a response. This continues for a fixed number of steps. We used Mistral Nemo Base 2407 and Llama 2 13B hf. We observed that most conversations start coherently but later fall into repetition. In many runs, a short phrase appears and then repeats across turns. Once repetition begins, both models tend to produce similar output rather than introducing a new direction in the conversation. This leads to a loop in which the same or similar text is produced repeatedly. We describe this behavior as a form of convergence. It occurs even though the models are large, trained separately, and not given any prompt instructions. To study this behavior, we apply lexical and embedding-based metrics to measure how far the conversation drifts from the initial seed and how similar the outputs of the two models become as the conversation progresses.
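The turn-taking loop and the drift measurement described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the stand-in `echoing_model` function and the token-set Jaccard metric are assumptions chosen to mimic the reported repetition collapse, in place of the actual LLMs (Mistral Nemo Base 2407, Llama 2 13B hf) and the paper's embedding-based metrics.

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Lexical overlap between two texts (token-set Jaccard).

    A simple stand-in for the lexical metrics in the paper;
    embedding-based similarity would replace this in practice.
    """
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)


def run_dialogue(model_a, model_b, seed: str, turns: int) -> list[str]:
    """Alternate turns: each model reads the other's last output.

    `model_a` and `model_b` are callables mapping a prompt string to a
    response string; in the paper these would be LLM generation calls.
    """
    history = [seed]
    current = seed
    for t in range(turns):
        model = model_a if t % 2 == 0 else model_b
        current = model(current)
        history.append(current)
    return history


def echoing_model(prompt: str) -> str:
    """Hypothetical toy model that degenerates into echoing its input,
    mimicking the repetition loop reported in the paper."""
    words = prompt.split()
    return " ".join(words[-4:]) if len(words) > 4 else prompt


# Run a short dialogue from a seed sentence and track drift from it.
seed = "The quick brown fox jumps over the lazy dog"
history = run_dialogue(echoing_model, echoing_model, seed, 6)
drift = [jaccard_similarity(seed, h) for h in history]
```

With these toy models the transcript collapses to the fixed phrase "over the lazy dog" after one turn, and the drift curve flattens, which is the qualitative signature of convergence the paper measures with stronger metrics.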

Country of Origin
🇺🇸 United States

Page Count
15 pages

Category
Computer Science:
Computation and Language