LAMARL: LLM-Aided Multi-Agent Reinforcement Learning for Cooperative Policy Generation
By: Guobin Zhu, Rui Zhou, Wenkang Ji, and more
Potential Business Impact:
Robots learn tasks faster with AI help.
Although Multi-Agent Reinforcement Learning (MARL) is effective for complex multi-robot tasks, it suffers from low sample efficiency and requires iterative manual reward tuning. Large Language Models (LLMs) have shown promise in single-robot settings, but their application to multi-robot systems remains largely unexplored. This paper introduces LLM-Aided MARL (LAMARL), a novel approach that integrates MARL with LLMs to significantly improve sample efficiency without manual design. LAMARL consists of two modules: the first leverages LLMs to fully automate the generation of prior policy and reward functions; the second is MARL, which uses the generated functions to guide robot policy training effectively. On a shape-assembly benchmark, both simulation and real-world experiments demonstrate LAMARL's unique advantages. Ablation studies show that the prior policy improves sample efficiency by an average of 185.9% and enhances task completion, while structured prompts based on Chain-of-Thought (CoT) and basic APIs improve LLM output success rates by 28.5%-67.5%. Videos and code are available at https://windylab.github.io/LAMARL/.
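To make the two-module pipeline concrete, here is a minimal, hypothetical sketch (not the authors' released code from the project page): every name, the stubbed llm_complete call, and the toy hill-climbing "training" loop are illustrative assumptions. Module 1 prompts an LLM, restricted to a small documented API and a step-by-step (CoT-style) prompt, for reward-function source code and compiles it; module 2 stands in for a MARL loop that uses the generated reward to guide agents toward their assembly targets.

```python
# Hypothetical sketch of the two-module idea; names, the stub LLM, and
# the toy training loop are assumptions, not the paper's implementation.
import numpy as np

def llm_complete(prompt: str) -> str:
    """Stand-in for a real LLM call; returns reward-function source the
    way a CoT-prompted model restricted to a small API might."""
    return (
        "def reward(robot_pos, target_pos):\n"
        "    # Denser reward the closer the robot is to its slot.\n"
        "    return -dist(robot_pos, target_pos)\n"
    )

def generate_reward_fn():
    """Module 1: ask the LLM for a reward function over a fixed,
    documented API (here just dist), then compile the returned source."""
    prompt = (
        "Using only dist(a, b) -> float, write reward(robot_pos, "
        "target_pos) -> float that increases as the robot nears its "
        "assembly target. Think step by step; output only code."
    )
    source = llm_complete(prompt)
    ns = {"dist": lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))}
    exec(source, ns)  # in practice, sandbox and unit-test generated code
    return ns["reward"]

def train(reward_fn, n_agents=3, episodes=200, steps=50):
    """Module 2: a toy gradient-free stand-in for MARL. Each agent nudges
    a 2-D position and keeps moves that raise the LLM-generated reward;
    real LAMARL would run an actual MARL algorithm guided by the
    generated prior policy and reward."""
    rng = np.random.default_rng(0)
    targets = rng.uniform(-1, 1, size=(n_agents, 2))  # assembly slots
    pos = rng.uniform(-1, 1, size=(n_agents, 2))
    for _ in range(episodes):
        pos = rng.uniform(-1, 1, size=(n_agents, 2))  # fresh episode
        for _ in range(steps):
            for i in range(n_agents):
                step = rng.normal(scale=0.1, size=2)  # random perturbation
                if reward_fn(pos[i] + step, targets[i]) > reward_fn(pos[i], targets[i]):
                    pos[i] += step                    # keep improving moves
    return pos, targets

if __name__ == "__main__":
    reward_fn = generate_reward_fn()
    pos, targets = train(reward_fn)
    print("final mean distance to targets:",
          np.linalg.norm(pos - targets, axis=1).mean())
```

Constraining the LLM to a small, documented API and a structured prompt is what the ablation ties to the 28.5%-67.5% jump in output success rates: the narrower the interface, the fewer ways generated code can be invalid.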
Similar Papers
Enhancing Multi-Agent Systems via Reinforcement Learning with LLM-based Planner and Graph-based Policy
Computer Vision and Pattern Recognition
Helps robots work together on hard jobs.
Language-Guided Multi-Agent Learning in Simulations: A Unified Framework and Evaluation
Artificial Intelligence
Helps AI teams work together better in games.
LLM Collaboration With Multi-Agent Reinforcement Learning
Artificial Intelligence
Helps AI agents work together to write and code.