Collaborating Action by Action: A Multi-agent LLM Framework for Embodied Reasoning
By: Isadora White, Kolby Nottingham, Ayush Maniar, and more
Potential Business Impact:
Lets AI characters work together in games.
Collaboration is ubiquitous and essential in day-to-day life -- from exchanging ideas, to delegating tasks, to generating plans together. This work studies how LLMs can adaptively collaborate to perform complex embodied reasoning tasks. To this end, we introduce MINDcraft, an easily extensible platform that enables LLM agents to control characters in the open-world game of Minecraft, and MineCollab, a benchmark that tests the different dimensions of embodied and collaborative reasoning. An experimental study finds that the primary bottleneck to effective collaboration for current state-of-the-art agents is efficient natural language communication, with agent performance dropping by as much as 15% when agents are required to communicate detailed task-completion plans. We conclude that existing LLM agents are ill-optimized for multi-agent collaboration, especially in embodied scenarios, and we highlight the need for methods beyond in-context and imitation learning. Our website can be found here: https://mindcraft-minecollab.github.io/
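To make the "action by action" setup concrete, here is a minimal sketch of a turn-based loop in which agents alternate between acting in a shared world and broadcasting natural-language messages to teammates. The agent names, the stubbed policy (standing in for an LLM call), and the message format are all illustrative assumptions, not the actual MINDcraft API.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    inbox: list = field(default_factory=list)

    def policy(self, observation, messages):
        # Stand-in for an LLM call: decide to chat a one-step plan
        # or to take an action, based on the observed world state.
        if "need_planks" in observation:
            return ("chat", f"{self.name}: I will craft planks next.")
        return ("act", "mine_log")

def step(agents, world):
    """One shared turn: each agent reads its messages, then acts or chats."""
    for agent in agents:
        messages = list(agent.inbox)
        agent.inbox.clear()
        kind, payload = agent.policy(world["state"], messages)
        if kind == "chat":
            # Broadcast the natural-language message to all teammates.
            for other in agents:
                if other is not agent:
                    other.inbox.append(payload)
        else:
            world["log"].append((agent.name, payload))
    return world

agents = [Agent("alice"), Agent("bob")]
world = {"state": "need_planks", "log": []}
step(agents, world)
```

Because turns are sequential, "bob" already sees the message "alice" sent earlier in the same turn; this is one simple way to model the communication bottleneck the abstract measures.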
Similar Papers
Enhancing Reasoning with Collaboration and Memory
Artificial Intelligence
AI agents learn together, remembering to solve harder problems.
CoBel-World: Harnessing LLM Reasoning to Build a Collaborative Belief World for Optimizing Embodied Multi-Agent Collaboration
Artificial Intelligence
Helps AI teams work together better by guessing what others think.
Multi-Agent Language Models: Advancing Cooperation, Coordination, and Adaptation
Computation and Language
Helps AI understand and work with people.