How to Train a Leader: Hierarchical Reasoning in Multi-Agent LLMs
By: Andrew Estornell, Jean-Francois Ton, Muhammad Faaiz Taufiq, and more
Potential Business Impact:
Trains one smart AI to lead others.
Large Language Models (LLMs) have achieved strong performance on a wide range of complex reasoning tasks, yet further gains are often possible by leveraging the complementary strengths of multiple models. While multi-agent frameworks can improve solution quality by combining several LLMs, existing methods are often computationally expensive at both training and inference time. In this work, we introduce a hierarchical multi-agent framework that addresses these challenges by training only a single leader LLM to coordinate a team of untrained peer agents. To this end, we propose Multi-agent guided Leader Policy Optimization (MLPO), a novel approach which trains the leader to evaluate and synthesize agent responses without auxiliary value networks or explicit agent feedback. Leaders trained with MLPO perform better not only when coordinating the agent team at inference time, but also when deployed on their own in single-agent settings without the team. Empirical results on Big-Bench Hard (BBH), MATH, and MMLU demonstrate that our framework achieves substantial performance improvements over both single-agent and multi-agent baselines. These results highlight the effectiveness and efficiency of training a single, flexible leader for collaborative reasoning in multi-agent LLM systems.
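The abstract describes the leader's inference-time role: it coordinates a team of untrained peer agents, then evaluates and synthesizes their candidate answers. Below is a minimal sketch of what such a coordination loop might look like, assuming simple prompt-based agents; the function names, prompts, and round structure are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a leader-coordinated multi-agent loop (not the paper's code).
# `leader_generate` and `agent_generate` are assumed stand-ins for LLM inference calls.
from typing import Callable, List


def coordinate(
    question: str,
    leader_generate: Callable[[str], str],        # the single trained leader LLM
    agent_generate: List[Callable[[str], str]],   # untrained peer agents
    rounds: int = 2,
) -> str:
    """The leader drafts an answer, peers respond with their own candidates,
    and the leader evaluates and synthesizes them into a revised answer."""
    draft = leader_generate(f"Question: {question}\nGive your best answer.")
    for _ in range(rounds):
        # Each untrained peer agent proposes its own candidate answer.
        candidates = [
            gen(
                f"Question: {question}\nA colleague answered:\n{draft}\n"
                "Provide your own answer."
            )
            for gen in agent_generate
        ]
        # The leader evaluates the candidates and synthesizes an improved answer.
        # Under MLPO, only this leader model would receive training signal.
        numbered = "\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
        draft = leader_generate(
            f"Question: {question}\nCandidate answers:\n{numbered}\n"
            "Evaluate these candidates and write a single improved answer."
        )
    return draft
```

Because only the leader is trained, the peer agents can be swapped or run off-the-shelf, which is consistent with the paper's claim that the framework avoids the training and inference costs of tuning every agent.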
Similar Papers
Multi-Agent Tool-Integrated Policy Optimization
Computation and Language
Helps AI agents work together to solve harder problems.
Agent-as-Tool: A Study on the Hierarchical Decision Making with Reinforcement Learning
Artificial Intelligence
AI learns better by splitting tasks.
Heterogeneous Group-Based Reinforcement Learning for LLM-based Multi-Agent Systems
Machine Learning (CS)
Teaches AI groups to work better, faster.