Language-Guided Multi-Agent Learning in Simulations: A Unified Framework and Evaluation
By: Zhengyang Li
Potential Business Impact:
Helps AI teams work together better in games.
This paper introduces LLM-MARL, a unified framework that incorporates large language models (LLMs) into multi-agent reinforcement learning (MARL) to enhance coordination, communication, and generalization in simulated game environments. The framework comprises three modular components: a Coordinator that dynamically generates subgoals, a Communicator that facilitates symbolic inter-agent messaging, and a Memory module that supports episodic recall. Training combines PPO with a language-conditioned loss and LLM query gating. LLM-MARL is evaluated in Google Research Football, MAgent Battle, and StarCraft II. Results show consistent improvements over MAPPO and QMIX in win rate, coordination score, and zero-shot generalization. Ablation studies show that subgoal generation and language-based messaging each contribute significantly to the performance gains. Qualitative analysis reveals emergent behaviors such as role specialization and communication-driven tactics. By bridging language modeling and policy learning, this work contributes to the design of intelligent, cooperative agents in interactive simulations and offers a path forward for leveraging LLMs in multi-agent systems used for training, games, and human-AI collaboration.
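The abstract does not include implementation details, so the following is only a minimal Python sketch of how the Coordinator / Communicator / Memory split and the gated LLM querying described above might be wired together. Every name here (query_llm, gate_interval, the message format) is an illustrative assumption rather than the paper's actual API, and the placeholder LLM call stands in for a real hosted model; the language-conditioned PPO update is indicated only as a comment.

# Hypothetical sketch of the LLM-MARL loop summarized above.
# All names and interfaces are assumptions, not the paper's implementation.

from dataclasses import dataclass, field
from typing import Dict, List


def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call; a real system would query a hosted model."""
    return f"subgoal for: {prompt[:40]}"


@dataclass
class Memory:
    """Episodic buffer supporting recall of recent (step, subgoal) records."""
    episodes: List[Dict] = field(default_factory=list)

    def recall(self, k: int = 3) -> List[Dict]:
        return self.episodes[-k:]

    def store(self, record: Dict) -> None:
        self.episodes.append(record)


class Coordinator:
    """Generates subgoals, querying the LLM only when the gate allows it."""

    def __init__(self, gate_interval: int = 10):
        self.gate_interval = gate_interval  # query gating: call the LLM every N steps
        self.cached_subgoal = "hold position"

    def subgoal(self, step: int, state_summary: str, memory: Memory) -> str:
        if step % self.gate_interval == 0:  # gated LLM query
            context = "; ".join(str(r) for r in memory.recall())
            self.cached_subgoal = query_llm(f"{state_summary} | history: {context}")
        return self.cached_subgoal  # reuse cached subgoal between queries


class Communicator:
    """Turns the current subgoal into symbolic per-agent messages."""

    def messages(self, subgoal: str, agent_ids: List[str]) -> Dict[str, str]:
        return {aid: f"{aid}: {subgoal}" for aid in agent_ids}


if __name__ == "__main__":
    memory = Memory()
    coordinator = Coordinator(gate_interval=5)
    communicator = Communicator()
    agents = ["striker", "defender"]

    for step in range(10):
        subgoal = coordinator.subgoal(step, state_summary=f"step {step} state", memory=memory)
        msgs = communicator.messages(subgoal, agents)
        # A language-conditioned PPO update would consume (obs, msgs, subgoal) here.
        memory.store({"step": step, "subgoal": subgoal})

The gating interval is the design choice of interest: it caps LLM queries per episode, so agents fall back on the cached subgoal between calls, which is one plausible reading of the "LLM query gating" mentioned in the abstract.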
Similar Papers
Enhancing Multi-Agent Systems via Reinforcement Learning with LLM-based Planner and Graph-based Policy
CV and Pattern Recognition
Helps robots work together on hard jobs.
LAMARL: LLM-Aided Multi-Agent Reinforcement Learning for Cooperative Policy Generation
Robotics
Robots learn tasks faster with AI help.
LLM-Mediated Guidance of MARL Systems
Multiagent Systems
AI learns faster with smart advice.