Large Language Models Miss the Multi-Agent Mark
By: Emanuele La Malfa, Gabriele La Malfa, Samuele Marro, et al.
Potential Business Impact:
Makes AI teams work together like real teams.
Recent interest in Multi-Agent Systems of Large Language Models (MAS LLMs) has led to an increase in frameworks leveraging multiple LLMs to tackle complex tasks. However, much of this literature appropriates the terminology of MAS without engaging with its foundational principles. In this position paper, we highlight critical discrepancies between MAS theory and current MAS LLM implementations, focusing on four key areas: the social aspect of agency, environment design, coordination and communication protocols, and measuring emergent behaviours. Our position is that many MAS LLMs lack core multi-agent characteristics such as autonomy, social interaction, and structured environments, and often rely on oversimplified, LLM-centric architectures. The field risks slowing down and losing traction by revisiting problems the MAS literature has already addressed. We therefore systematically analyse this issue and outline the associated research opportunities, advocating for better integration of established MAS concepts and more precise terminology to avoid mischaracterisation and missed opportunities.
Similar Papers
Beyond Self-Talk: A Communication-Centric Survey of LLM-Based Multi-Agent Systems
Multiagent Systems
Helps AI teams talk and work together better.
Beyond Static Responses: Multi-Agent LLM Systems as a New Paradigm for Social Science Research
Multiagent Systems
Helps computers study how people act together.
Literature Review Of Multi-Agent Debate For Problem-Solving
Multiagent Systems
Multiple AIs working together solve harder problems.