Structured Imitation Learning of Interactive Policies through Inverse Games
By: Max M. Sun, Todd Murphey
Potential Business Impact:
Teaches robots to work with people.
Generative model-based imitation learning methods have recently achieved strong results in learning high-complexity motor skills from human demonstrations. However, imitation learning of interactive policies that coordinate with humans in shared spaces without explicit communication remains challenging, because multi-agent interactions exhibit far higher behavioral complexity than non-interactive tasks. In this work, we introduce a structured imitation learning framework for interactive policies that combines generative single-agent policy learning with a flexible yet expressive game-theoretic structure. Our method explicitly separates learning into two steps: first, we learn individual behavioral patterns from multi-agent demonstrations using standard imitation learning; then, we learn inter-agent dependencies structurally by solving an inverse game problem. Preliminary results on a synthetic 5-agent social navigation task show that our method significantly improves upon non-interactive policies and performs comparably to the ground-truth interactive policy using only 50 demonstrations. These results highlight the potential of structured imitation learning in interactive settings.
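The two-step decomposition can be illustrated with a toy sketch. Everything below is an assumption for illustration, not the paper's actual model or benchmark: agents seek goals while repelling each other, step 1 fits each agent's policy from its own state alone (a least-squares stand-in for the paper's generative policy), and step 2 plays the role of the inverse game by fitting a single shared interaction weight to the residual the non-interactive policies cannot explain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic setup (shapes, dynamics, and names are assumptions):
# N agents move toward private goals while repelling each other; demonstrations
# come from this ground-truth interactive policy.
N, T, D, N_DEMOS = 5, 25, 2, 50
W_TRUE = 0.5                                  # ground-truth coupling strength
goals = rng.uniform(-1, 1, (N, D))

def repulsion(s):
    """Interaction feature: inverse-distance push away from all other agents."""
    out = np.zeros_like(s)
    for i in range(N):
        for j in range(N):
            if i != j:
                d = s[i] - s[j]
                out[i] += d / (d @ d + 1e-3)
    return out

def rollout():
    s = rng.uniform(-1, 1, (N, D))
    S, A = [], []
    for _ in range(T):
        a = (goals - s) + W_TRUE * repulsion(s)   # expert action
        S.append(s.copy()); A.append(a.copy())
        s = s + 0.05 * a
    return np.array(S), np.array(A)

demos = [rollout() for _ in range(N_DEMOS)]
S = np.concatenate([s for s, _ in demos])         # (N_DEMOS*T, N, D)
A = np.concatenate([a for _, a in demos])

# Step 1: standard single-agent imitation — fit each agent's policy from its
# own state only (an affine least-squares policy as a simple stand-in).
pred1 = np.zeros_like(A)
for i in range(N):
    X = np.hstack([S[:, i, :], np.ones((len(S), 1))])   # affine features
    coef, *_ = np.linalg.lstsq(X, A[:, i, :], rcond=None)
    pred1[:, i, :] = X @ coef

# Step 2: inverse-game stand-in — explain the residual left by the
# non-interactive policies with a shared interaction term (one scalar weight).
resid = A - pred1
feats = np.array([repulsion(s) for s in S])
w_hat = float((feats * resid).sum() / (feats * feats).sum())

mse1 = float((resid ** 2).mean())
mse2 = float(((resid - w_hat * feats) ** 2).mean())
print(f"recovered coupling w = {w_hat:.3f}")
print(f"imitation MSE: non-interactive {mse1:.4f} -> structured {mse2:.4f}")
```

The structured model strictly reduces the imitation error relative to the non-interactive step-1 policies, since step 2 only has to account for the part of the demonstrated behavior that depends on the other agents.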
Similar Papers
Imitation Learning Based on Disentangled Representation Learning of Behavioral Characteristics
Robotics
Robots change how they move based on your words.
Few-Shot Neuro-Symbolic Imitation Learning for Long-Horizon Planning and Acting
Robotics
Teaches robots complex tasks with few examples.
Steering Robots with Inference-Time Interactions
Robotics
Lets you fix robot mistakes without retraining.