Strategic Communication and Language Bias in Multi-Agent LLM Coordination

Published: July 30, 2025 | arXiv ID: 2508.00032v1

By: Alessio Buscemi, Daniele Proverbio, Alessandro Di Stefano and more

Potential Business Impact:

Enables AI agents to coordinate more effectively through communication.

Large Language Model (LLM)-based agents are increasingly deployed in multi-agent scenarios where coordination is crucial but not always assured. Previous studies indicate that the language used to frame strategic scenarios can influence cooperative behavior. This paper explores whether allowing agents to communicate amplifies these language-driven effects. Leveraging the FAIRGAME framework, we simulate one-shot and repeated games across different languages and models, both with and without communication. Our experiments, conducted with two advanced LLMs, GPT-4o and Llama 4 Maverick, reveal that communication significantly influences agent behavior, though its impact varies by language, personality, and game structure. These findings underscore the dual role of communication in fostering coordination and reinforcing biases.
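The one-shot and repeated games mentioned above can be illustrated with a minimal sketch. This is not the FAIRGAME framework's actual API; it is a hypothetical Python model of a Prisoner's Dilemma, with scripted strategies standing in for LLM agents, to show the two game structures the paper evaluates.

```python
# Hypothetical sketch (not FAIRGAME's API): a Prisoner's Dilemma,
# with scripted strategies standing in for LLM agents.

PAYOFFS = {  # (action_a, action_b) -> (payoff_a, payoff_b)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def play_one_shot(action_a: str, action_b: str) -> tuple[int, int]:
    """One-shot game: a single simultaneous move, payoffs returned."""
    return PAYOFFS[(action_a, action_b)]

def play_repeated(strategy_a, strategy_b, rounds: int) -> tuple[int, int]:
    """Repeated game: each strategy sees the opponent's last action."""
    total_a = total_b = 0
    last_a = last_b = None  # no history before the first round
    for _ in range(rounds):
        a, b = strategy_a(last_b), strategy_b(last_a)
        pa, pb = play_one_shot(a, b)
        total_a, total_b = total_a + pa, total_b + pb
        last_a, last_b = a, b
    return total_a, total_b

# Example strategies: tit-for-tat cooperates first, then mirrors;
# always-defect never cooperates.
tit_for_tat = lambda opp: "C" if opp is None else opp
always_defect = lambda opp: "D"
print(play_repeated(tit_for_tat, always_defect, 5))  # (4, 9)
```

In the paper's setting, the scripted strategies would be replaced by LLM calls (optionally preceded by a communication phase), which is where language- and personality-driven effects enter.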

Country of Origin
🇮🇹 🇬🇧 Italy, United Kingdom

Page Count
13 pages

Category
Computer Science:
Multiagent Systems