ReCollab: Retrieval-Augmented LLMs for Cooperative Ad-hoc Teammate Modeling
By: Conor Wallace, Umer Siddique, Yongcan Cao
Potential Business Impact:
Helps robots learn to work together faster.
Ad-hoc teamwork (AHT) requires agents to infer the behavior of previously unseen teammates and adapt their policy accordingly. Conventional approaches often rely on fixed probabilistic models or classifiers, which can be brittle under partial observability and limited interaction. Large language models (LLMs) offer a flexible alternative: by mapping short behavioral traces into high-level hypotheses, they can serve as world models over teammate behavior. We introduce Collab, a language-based framework that classifies partner types using a behavior rubric derived from trajectory features, and extend it to ReCollab, which incorporates retrieval-augmented generation (RAG) to stabilize inference with exemplar trajectories. In the cooperative Overcooked environment, Collab effectively distinguishes teammate types, while ReCollab consistently improves adaptation across layouts, achieving Pareto-optimal trade-offs between classification accuracy and episodic return. These findings demonstrate the potential of LLMs as behavioral world models for AHT and highlight the importance of retrieval grounding in challenging coordination settings.
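To make the retrieval-grounding idea concrete, here is a minimal sketch of the kind of pipeline the abstract describes: summarize a short behavioral trace into features, retrieve the most similar exemplar trajectories, and use them to ground a teammate-type prediction. All names, feature meanings, and teammate types below are illustrative assumptions, not the paper's actual implementation, and a majority vote over retrieved labels stands in for the LLM call.

```python
from collections import Counter
import math

# Hypothetical exemplar store: (feature_vector, teammate_type) pairs.
# Features might summarize a short trace (e.g. fraction of onion pickups,
# dish deliveries, idle time in Overcooked); these are assumed, not from the paper.
EXEMPLARS = [
    ((0.9, 0.1, 0.0), "onion_specialist"),
    ((0.8, 0.2, 0.1), "onion_specialist"),
    ((0.1, 0.9, 0.1), "dish_runner"),
    ((0.2, 0.8, 0.0), "dish_runner"),
    ((0.3, 0.3, 0.9), "idler"),
]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query, k=3):
    """Return the k exemplar trajectories closest to the observed trace."""
    return sorted(EXEMPLARS, key=lambda ex: euclidean(ex[0], query))[:k]

def classify_teammate(trace_features, k=3):
    """Stand-in for the LLM inference step: in a RAG setup the retrieved
    exemplars would be inserted into the prompt alongside a behavior
    rubric; here we simply majority-vote the retrieved labels."""
    neighbors = retrieve(trace_features, k)
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# An observed trace resembling the onion-focused exemplars.
print(classify_teammate((0.85, 0.15, 0.05)))  # -> onion_specialist
```

The design point this sketch illustrates is the one the abstract credits for ReCollab's stability: grounding each prediction in concrete exemplar trajectories rather than asking the model to classify from the raw trace alone.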
Similar Papers
Learning "Partner-Aware" Collaborators in Multi-Party Collaboration
Artificial Intelligence
Teaches AI to work better with people.
Reinforcement Learning-Augmented LLM Agents for Collaborative Decision Making and Performance Optimization
Artificial Intelligence
Helps AI teams work together to finish tasks faster.
LLMs as Policy-Agnostic Teammates: A Case Study in Human Proxy Design for Heterogeneous Agent Teams
Machine Learning (CS)
Computers learn to play games like people.