Towards Foundation Models with Native Multi-Agent Intelligence
By: Shuyue Hu, Haoyang Yan, Yiqun Zhang, and more
Potential Business Impact:
Teaches AI to work together like a team.
Foundation models (FMs) are increasingly assuming the role of the "brain" of AI agents. While recent efforts have begun to equip FMs with native single-agent abilities -- such as GUI interaction or integrated tool use -- we argue that the next frontier is endowing FMs with native multi-agent intelligence. We identify four core capabilities of FMs in multi-agent contexts: understanding, planning, efficient communication, and adaptation. Contrary to assumptions about the spontaneous emergence of such abilities, we provide extensive empirical evidence across 41 large language models showing that strong single-agent performance alone does not automatically yield robust multi-agent intelligence. To address this gap, we outline key research directions -- spanning dataset construction, evaluation, training paradigms, and safety considerations -- for building FMs with native multi-agent intelligence.
Similar Papers
Adaptive and Resource-efficient Agentic AI Systems for Mobile and Embedded Devices: A Survey
Machine Learning (CS)
Makes smart robots learn and act anywhere.
Intelligence Foundation Model: A New Perspective to Approach Artificial General Intelligence
Artificial Intelligence
Builds smarter computers that learn like brains.
The FM Agent
Artificial Intelligence
AI finds new science answers by itself.