Learning to Orchestrate Agents in Natural Language with the Conductor

Published: December 4, 2025 | arXiv ID: 2512.04388v1

By: Stefan Nielsen, Edoardo Cetin, Peter Schwendeman, and more

Potential Business Impact:

Lets teams of AI models coordinate with one another to produce more accurate answers than any single model alone.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Powerful large language models (LLMs) from different providers have been expensively trained and fine-tuned to specialize across varying domains. In this work, we introduce a new kind of Conductor model trained with reinforcement learning to automatically discover powerful coordination strategies among LLMs. Our Conductor learns not only to design targeted communication topologies for effective agent-to-agent collaboration, but also to prompt-engineer focused instructions for each LLM to maximally leverage its individual capabilities. We show that, by learning optimal coordination strategies over pools of powerful worker LLMs, a 7B Conductor achieves significant performance gains beyond any individual worker, attaining state-of-the-art results on challenging reasoning benchmarks such as LiveCodeBench and GPQA. By training with randomized agent pools, our Conductor effectively adapts to arbitrary sets of open- and closed-source agents, meeting any user requirements. Furthermore, allowing the Conductor to select itself as a worker gives rise to recursive topologies, elevating performance with a new form of dynamic test-time scaling through online iterative adaptation. More broadly, ours is among the early works demonstrating that language model coordination can be unlocked through RL, where powerful coordination strategies emerge naturally in LLMs through pure end-to-end reward maximization.
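
The abstract describes the Conductor emitting both a communication topology and per-worker instructions, which the worker LLMs then execute. The sketch below is a minimal, hypothetical illustration of such an orchestration loop; the `Step`/`run_plan` names, the plan format, and the stub workers are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical sketch of a Conductor-style orchestration loop.
# All names and the plan format are illustrative assumptions,
# not the interface described in the paper.
from dataclasses import dataclass
from typing import Callable, Dict, List

# A worker LLM is modeled as a function: (instruction, context) -> answer.
Worker = Callable[[str, str], str]

@dataclass
class Step:
    worker: str        # which worker LLM to call
    instruction: str   # Conductor-written prompt for that worker
    inputs: List[str]  # names of earlier steps whose outputs are fed in

def run_plan(query: str, plan: List[Step], workers: Dict[str, Worker]) -> str:
    """Execute a Conductor plan: a small DAG of worker calls.

    Each step receives the user query plus the outputs of the steps it
    depends on, concatenated as context. The final step's output is the
    answer returned to the user.
    """
    outputs: Dict[str, str] = {}
    for i, step in enumerate(plan):
        context = query + "\n" + "\n".join(outputs[name] for name in step.inputs)
        outputs[f"step{i}"] = workers[step.worker](step.instruction, context)
    return outputs[f"step{len(plan) - 1}"]

# Toy usage with stub workers standing in for real LLM calls.
workers = {
    "coder": lambda instr, ctx: f"[coder draft for: {instr}]",
    "reviewer": lambda instr, ctx: f"[review of: {ctx[:40]}...]",
}
plan = [
    Step("coder", "Write a first solution to the task.", []),
    Step("reviewer", "Critique and refine the draft.", ["step0"]),
]
print(run_plan("Implement binary search.", plan, workers))
```

In the paper's setting, the plan itself (topology plus instructions) is generated by the RL-trained 7B Conductor rather than hand-written, and allowing the Conductor to appear as one of the workers is what yields the recursive topologies mentioned above.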

Page Count
39 pages

Category
Computer Science:
Machine Learning (CS)