R2BC: Multi-Agent Imitation Learning from Single-Agent Demonstrations
By: Connor Mattson, Varun Raveendra, Ellen Novoseller, and more
Potential Business Impact:
Teaches robot teams by showing one robot at a time.
Imitation Learning (IL) is a natural way for humans to teach robots, particularly when high-quality demonstrations are easy to obtain. While IL has been widely applied to single-robot settings, relatively few studies have addressed the extension of these methods to multi-agent systems, especially in settings where a single human must provide demonstrations to a team of collaborating robots. In this paper, we introduce and study Round-Robin Behavior Cloning (R2BC), a method that enables a single human operator to effectively train multi-robot systems through sequential, single-agent demonstrations. Our approach allows the human to teleoperate one agent at a time and incrementally teach multi-agent behavior to the entire system, without requiring demonstrations in the joint multi-agent action space. We show that R2BC methods match, and in some cases surpass, the performance of an oracle behavior cloning approach trained on privileged synchronized demonstrations across four multi-agent simulated tasks. Finally, we deploy R2BC on two physical robot tasks trained using real human demonstrations.
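The core idea described above — one human teleoperating a single agent per round while the remaining agents run their current learned policies, with each agent's policy updated by behavior cloning on only its own demonstrations — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the class name `RoundRobinBC`, the lookup-table "policy," and the `demonstrator` callback are all hypothetical stand-ins for a real policy network and human teleoperation interface.

```python
class RoundRobinBC:
    """Hypothetical sketch of Round-Robin Behavior Cloning (R2BC):
    the human controls one agent per round; each agent is trained by
    behavior cloning on its own single-agent demonstrations, so no
    joint multi-agent action labels are ever required."""

    def __init__(self, n_agents):
        self.n_agents = n_agents
        # Per-agent datasets of (observation, action) pairs.
        self.datasets = [[] for _ in range(n_agents)]
        # Toy stand-in for a policy: a lookup table obs -> action.
        self.policies = [{} for _ in range(n_agents)]

    def collect_round(self, demonstrator, observations, round_idx):
        """One round: the human teleoperates agent (round_idx % n_agents)
        and its (obs, action) pairs are logged; in a real system the
        other agents would simultaneously act with their current policies."""
        controlled = round_idx % self.n_agents
        for obs in observations:
            action = demonstrator(controlled, obs)  # human-provided action
            self.datasets[controlled].append((obs, action))
        return controlled

    def fit(self, agent):
        """Behavior cloning step: here, memorize the latest demonstrated
        action per observation (a real system would fit a network)."""
        for obs, action in self.datasets[agent]:
            self.policies[agent][obs] = action

    def act(self, agent, obs):
        """Query the agent's cloned policy."""
        return self.policies[agent].get(obs)
```

A usage sketch: with two agents and a scripted stand-in for the human, two rounds of collection and fitting give each agent a policy learned entirely from its own turn's demonstrations.

```python
r2bc = RoundRobinBC(n_agents=2)
demo = lambda agent, obs: obs * 10 + agent  # scripted "human" for illustration
for rnd in range(2):
    controlled = r2bc.collect_round(demo, observations=[1, 2, 3], round_idx=rnd)
    r2bc.fit(controlled)
```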
Similar Papers
Counterfactual Behavior Cloning: Offline Imitation Learning from Imperfect Human Demonstrations
Robotics
Robots learn better from human mistakes.
LLM-based Interactive Imitation Learning for Robotic Manipulation
Robotics
Teaches robots using AI, not people.
Ratatouille: Imitation Learning Ingredients for Real-world Social Robot Navigation
Robotics
Robots learn to walk safely around people.