Towards Language-Augmented Multi-Agent Deep Reinforcement Learning
By: Maxime Toquebiau, Jae-Yun Jun, Faïz Benamar, and more
Potential Business Impact:
Teaches robots to talk and work together.
Most prior work on communication in multi-agent reinforcement learning has focused on emergent communication, which often results in inefficient and non-interpretable systems. Inspired by the role of language in natural intelligence, we investigate how grounding agents in a human-defined language can improve the learning and coordination of embodied agents. We propose a framework in which agents are trained not only to act but also to produce and interpret natural language descriptions of their observations. This language-augmented learning serves a dual role: enabling efficient and interpretable communication between agents, and guiding representation learning. We demonstrate that language-augmented agents outperform emergent communication baselines across various tasks. Our analysis reveals that language grounding leads to more informative internal representations, better generalization to new partners, and improved capability for human-agent interaction. These findings demonstrate the effectiveness of integrating structured language into multi-agent learning and open avenues for more interpretable and capable multi-agent systems.
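To make the idea concrete, here is a minimal sketch of what "training agents to act and to describe their observations" could look like. This is not the authors' implementation: the class and parameter names, the single-token language head (standing in for a full sequence decoder), and the loss weighting are illustrative assumptions. The key point it shows is a shared observation encoder feeding both a policy head and an auxiliary language head, so the grounding objective shapes the learned representation.

```python
# Hypothetical sketch of a language-augmented agent (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguageAugmentedAgent(nn.Module):
    def __init__(self, obs_dim, n_actions, vocab_size, hidden=128):
        super().__init__()
        # Shared encoder: both objectives backpropagate into it.
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, n_actions)      # acting
        self.language_head = nn.Linear(hidden, vocab_size)   # describing

    def forward(self, obs):
        z = self.encoder(obs)
        return self.policy_head(z), self.language_head(z)

def joint_loss(action_logits, lang_logits, action_targets, desc_tokens, lam=0.5):
    # Placeholder RL loss (plain cross-entropy here for brevity) plus an
    # auxiliary language loss that grounds the encoder in human-defined words.
    rl_loss = F.cross_entropy(action_logits, action_targets)
    lang_loss = F.cross_entropy(lang_logits, desc_tokens)
    return rl_loss + lam * lang_loss
```

In a sketch like this, the language head can also drive inter-agent communication at execution time: an agent emits its predicted description, and a partner conditions its own policy on the received tokens, which is what makes the exchanged messages human-readable rather than emergent symbols.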
Similar Papers
Grounding Natural Language for Multi-agent Decision-Making with Multi-agentic LLMs
Artificial Intelligence
Lets AI teams work together to solve problems.
Why do AI agents communicate in human language?
Artificial Intelligence
AI agents talk better using math, not words.
Grounding Multimodal LLMs to Embodied Agents that Ask for Help with Reinforcement Learning
Artificial Intelligence
Robots learn to ask questions to do jobs better.