Strategic Communication and Language Bias in Multi-Agent LLM Coordination
By: Alessio Buscemi, Daniele Proverbio, Alessandro Di Stefano, and more
Potential Business Impact:
Enables AI agents to coordinate more effectively by communicating with one another.
Large Language Model (LLM)-based agents are increasingly deployed in multi-agent scenarios where coordination is crucial but not always assured. Previous studies indicate that the language used to frame strategic scenarios can influence cooperative behavior. This paper explores whether allowing agents to communicate amplifies these language-driven effects. Leveraging the FAIRGAME framework, we simulate one-shot and repeated games across different languages and models, both with and without communication. Our experiments, conducted with two advanced LLMs, GPT-4o and Llama 4 Maverick, reveal that communication significantly influences agent behavior, though its impact varies by language, personality, and game structure. These findings underscore the dual role of communication in fostering coordination and reinforcing biases.
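To make the experimental setup concrete, below is a minimal sketch of a repeated game (here, a Prisoner's Dilemma) between two LLM-backed agents with an optional pre-round communication phase and a language parameter. This is an illustration only, not the FAIRGAME framework's actual API; the function query_llm is a hypothetical placeholder for a real model call (e.g., to GPT-4o or Llama 4 Maverick) and returns a canned reply here so the script runs standalone.

import random

# Payoff matrix: (action of A, action of B) -> (payoff A, payoff B)
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def query_llm(prompt: str) -> str:
    # Placeholder for an actual LLM call; here it simply cooperates ~60% of the time.
    return "C" if random.random() < 0.6 else "D"

def play_round(history, communicate: bool, language: str = "English"):
    messages = {}
    if communicate:
        # Optional communication phase: each agent drafts a short message
        # in the target language before choosing an action.
        for agent in ("A", "B"):
            messages[agent] = query_llm(
                f"[{language}] You are agent {agent}. History so far: {history}. "
                "Send a one-sentence message to the other player."
            )
    actions = {}
    for agent in ("A", "B"):
        actions[agent] = query_llm(
            f"[{language}] You are agent {agent}. History so far: {history}. "
            f"Messages exchanged: {messages}. Reply with 'C' (cooperate) or 'D' (defect)."
        )
    return actions["A"], actions["B"]

def run_game(rounds: int = 10, communicate: bool = True, language: str = "English"):
    # Repeated game loop: accumulate payoffs and keep the action history,
    # which is fed back into the prompts on later rounds.
    history, scores = [], [0, 0]
    for _ in range(rounds):
        a, b = play_round(history, communicate, language)
        pa, pb = PAYOFFS[(a, b)]
        scores[0] += pa
        scores[1] += pb
        history.append((a, b))
    return scores, history

if __name__ == "__main__":
    # Compare outcomes with and without the communication phase, in a given language.
    print("with communication:", run_game(rounds=5, communicate=True, language="Italian"))
    print("without communication:", run_game(rounds=5, communicate=False, language="Italian"))

In the paper's setup, varying the prompt language and enabling or disabling the communication phase are the two levers whose interaction is measured; the sketch above shows where each lever would enter such a simulation loop.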
Similar Papers
Understanding LLM Agent Behaviours via Game Theory: Strategy Recognition, Biases and Multi-Agent Dynamics
Multiagent Systems
AI learns how to play games fairly.
Why do AI agents communicate in human language?
Artificial Intelligence
AI agents talk better using math, not words.
Multi-Agent Language Models: Advancing Cooperation, Coordination, and Adaptation
Computation and Language
Helps AI understand and work with people.