Zero-Shot Coordination in Ad Hoc Teams with Generalized Policy Improvement and Difference Rewards
By: Rupal Nigam, Niket Parikh, Hamid Osooli, and more
Potential Business Impact:
Robots learn to work together instantly with new teammates.
Real-world multi-agent systems may require ad hoc teaming, where an agent must coordinate with previously unseen teammates to solve a task in a zero-shot manner. Prior work typically either selects a pretrained policy based on an inferred model of the new teammates or pretrains a single policy that is robust to potential teammates. Instead, we propose to leverage all pretrained policies in a zero-shot transfer setting. We formalize this problem as an ad hoc multi-agent Markov decision process and present a solution built on two key ideas, generalized policy improvement and difference rewards, for efficient and effective knowledge transfer between different teams. We empirically demonstrate that our algorithm, Generalized Policy improvement for Ad hoc Teaming (GPAT), successfully enables zero-shot transfer to new teams in three simulated environments: cooperative foraging, predator-prey, and Overcooked. We also demonstrate our algorithm in a real-world multi-robot setting.
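To make the two key ideas concrete: generalized policy improvement (GPI) acts greedily with respect to the maximum over the Q-functions of all pretrained policies, and a difference reward credits an agent with its marginal contribution to the team reward. Below is a minimal sketch of both, assuming a simple interface in which each pretrained policy exposes a Q-function over the ego agent's actions; the names (`q_functions`, `counterfactual_reward`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gpi_action(state, q_functions):
    """Generalized policy improvement: act greedily with respect to the
    maximum over the Q-functions of all pretrained policies.

    q_functions: list of callables mapping a state to an array of
    Q-values over the ego agent's actions (assumed interface).
    """
    # Stack Q-values from every pretrained policy: shape (n_policies, n_actions).
    q = np.stack([qf(state) for qf in q_functions])
    # Take the max over policies for each action, then the greedy action.
    return int(np.argmax(q.max(axis=0)))

def difference_reward(global_reward, counterfactual_reward):
    """Difference reward for one agent: the realized team reward minus the
    reward the team would have received had this agent instead taken a
    default/null action (the counterfactual term comes from the
    environment or a learned model)."""
    return global_reward - counterfactual_reward
```

Taking the max over policies before the argmax is what distinguishes GPI from simply picking one pretrained policy: the resulting behavior can outperform every individual source policy on the new team.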
Similar Papers
Generic-to-Specific Reasoning and Learning for Scalable Ad Hoc Teamwork
Artificial Intelligence
Helps robots work together without knowing each other.
PADiff: Predictive and Adaptive Diffusion Policies for Ad Hoc Teamwork
Artificial Intelligence
Helps robots learn to work together instantly.
Zero-Shot Action Generalization with Limited Observations
Machine Learning (CS)
Teaches robots new actions with few examples.