AutoMaAS: Self-Evolving Multi-Agent Architecture Search for Large Language Models
By: Bo Ma, Hang Li, ZeHua Hu, and more
Potential Business Impact:
Builds smarter AI teams that work better and cheaper.
Multi-agent systems powered by large language models have demonstrated remarkable capabilities across diverse domains, yet existing automated design approaches produce monolithic solutions that fail to adapt resource allocation to query complexity and domain requirements. This paper introduces AutoMaAS, a self-evolving multi-agent architecture search framework that leverages neural architecture search principles to automatically discover optimal agent configurations through dynamic operator lifecycle management and automated machine learning techniques. Our approach incorporates four key innovations: (1) automatic operator generation, fusion, and elimination based on performance-cost analysis, (2) dynamic cost-aware optimization with real-time parameter adjustment, (3) online feedback integration for continuous architecture refinement, and (4) enhanced interpretability through decision tracing mechanisms. Extensive experiments across six benchmarks demonstrate that AutoMaAS achieves a 1.0-7.1% performance improvement while reducing inference costs by 3-5% compared to state-of-the-art methods. The framework shows superior transferability across datasets and LLM backbones, establishing a new paradigm for automated multi-agent system design in the era of large language models.
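The performance-cost elimination step in innovation (1) can be illustrated with a minimal sketch. The class names, utility metric (performance divided by cost), and threshold below are assumptions for illustration, not the paper's actual method:

```python
# Hypothetical sketch of performance-cost-based operator elimination.
# Operator names, metrics, and the utility threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    performance: float  # e.g., benchmark accuracy in [0, 1]
    cost: float         # e.g., mean inference cost per query

def prune_operators(ops, min_utility=1.0):
    """Keep operators whose performance-per-cost utility clears the threshold."""
    return [op for op in ops if op.performance / op.cost >= min_utility]

ops = [
    Operator("chain_of_thought", performance=0.82, cost=0.5),
    Operator("debate_ensemble", performance=0.85, cost=1.2),
    Operator("self_refine", performance=0.80, cost=0.6),
]
survivors = prune_operators(ops)
print([op.name for op in survivors])  # the costly debate_ensemble is eliminated
```

Under this assumed utility, the slightly more accurate but much more expensive operator is dropped, which mirrors the cost-aware trade-off the abstract describes.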
Similar Papers
MASA: LLM-Driven Multi-Agent Systems for Autoformalization
Computation and Language
Helps computers turn words into math rules.
MAS$^2$: Self-Generative, Self-Configuring, Self-Rectifying Multi-Agent Systems
Multiagent Systems
Systems build better systems that solve harder problems.
Automated Design Optimization via Strategic Search with Large Language Models
Machine Learning (CS)
Helps computers design better code faster and cheaper.