Rethinking Multi-Agent Intelligence Through the Lens of Small-World Networks
By: Boxuan Wang, Zhuoyun Li, Xiaowei Huang, and more
Large language models (LLMs) have enabled multi-agent systems (MAS) in which multiple agents argue, critique, and coordinate to solve complex tasks, making communication topology a first-class design choice. Yet most existing LLM-based MAS adopt fully connected graphs, simple sparse rings, or ad-hoc dynamic selection, with little structural guidance. In this work, we revisit classic theory on small-world (SW) networks and ask: what changes if we treat SW connectivity as a design prior for MAS? We first bridge insights from neuroscience and complex networks to MAS, highlighting how SW structures balance local clustering and long-range integration. Using multi-agent debate (MAD) as a controlled testbed, our experiments show that SW connectivity yields nearly the same accuracy and token cost as denser alternatives, while substantially stabilizing consensus trajectories. Building on this, we introduce an uncertainty-guided rewiring scheme for scaling MAS, in which long-range shortcuts are added between epistemically divergent agents using LLM-oriented uncertainty signals (e.g., semantic entropy). This yields controllable SW structures that adapt to task difficulty and agent heterogeneity. Finally, we discuss broader implications of SW priors for MAS design, framing them as stabilizers of reasoning, enhancers of robustness, scalable coordinators, and inductive biases for emergent cognitive roles.
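To make the two structural ideas concrete, here is a minimal sketch (not the paper's implementation): agents start on a ring lattice, which provides the local clustering of a small-world graph, and long-range shortcuts are then added between the non-adjacent agent pairs whose sampled answers diverge most, using a simple Jensen-Shannon-style disagreement score as a stand-in for the semantic-entropy signal mentioned in the abstract. All names (`build_ring_lattice`, `rewire_by_uncertainty`, `num_shortcuts`) and the toy answer data are illustrative assumptions.

```python
import itertools
import math
import random
from collections import Counter


def build_ring_lattice(n: int, k: int) -> set[frozenset[int]]:
    """Ring lattice: each of n agents talks to its k nearest neighbors (k even)."""
    edges = set()
    for i in range(n):
        for offset in range(1, k // 2 + 1):
            edges.add(frozenset((i, (i + offset) % n)))
    return edges


def answer_entropy(samples: list[str]) -> float:
    """Shannon entropy over an agent's sampled answers (crude stand-in for semantic entropy)."""
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())


def divergence(samples_a: list[str], samples_b: list[str]) -> float:
    """Disagreement proxy: entropy of the pooled answers minus the mean individual entropy.
    High values mean the two agents are each confident, but about different answers."""
    pooled = answer_entropy(samples_a + samples_b)
    return pooled - 0.5 * (answer_entropy(samples_a) + answer_entropy(samples_b))


def rewire_by_uncertainty(edges, agent_samples, num_shortcuts: int = 2):
    """Add long-range shortcuts between the most epistemically divergent non-adjacent pairs."""
    n = len(agent_samples)
    candidates = [
        (divergence(agent_samples[i], agent_samples[j]), frozenset((i, j)))
        for i, j in itertools.combinations(range(n), 2)
        if frozenset((i, j)) not in edges
    ]
    candidates.sort(reverse=True, key=lambda t: t[0])
    return edges | {e for _, e in candidates[:num_shortcuts]}


if __name__ == "__main__":
    random.seed(0)
    n_agents = 8
    # Toy stand-in for each agent's repeated sampled answers to the same question.
    agent_samples = [[random.choice("AB") for _ in range(5)] for _ in range(n_agents)]
    topology = build_ring_lattice(n_agents, k=2)
    topology = rewire_by_uncertainty(topology, agent_samples, num_shortcuts=2)
    print(sorted(tuple(sorted(e)) for e in topology))
```

In a real debate setup, `agent_samples` would come from sampling each agent's answer several times per round, and the resulting edge set would determine which agents exchange critiques in the next round, so shortcuts land exactly where the group disagrees.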