Offline Imitation Learning upon Arbitrary Demonstrations by Pre-Training Dynamics Representations
By: Haitong Ma, Bo Dai, Zhaolin Ren and more
Potential Business Impact:
Teaches robots to learn from less data.
Limited data has become a major bottleneck in scaling up offline imitation learning (IL). In this paper, we propose enhancing IL performance under limited expert data by introducing a pre-training stage that learns dynamics representations derived from factorizations of the transition dynamics. We first theoretically justify that the optimal decision variable of offline IL lies in the representation space, significantly reducing the number of parameters to learn in downstream IL. Moreover, the dynamics representations can be learned from arbitrary data collected under the same dynamics, allowing the reuse of massive non-expert data and mitigating the limited-data issue. We present a tractable loss function, inspired by noise contrastive estimation, for learning the dynamics representations at the pre-training stage. Experiments on MuJoCo demonstrate that our proposed algorithm can mimic expert policies with as few as a single trajectory. Experiments on real quadrupeds show that we can leverage pre-trained dynamics representations from simulator data to learn to walk from a few real-world demonstrations.
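The abstract describes the pre-training stage only at a high level: factorize the transition dynamics and learn the resulting representations with a noise-contrastive-estimation-style loss on arbitrary transition data. Below is a minimal, hypothetical PyTorch sketch of that idea, assuming the common low-rank factorization T(s' | s, a) ∝ exp(φ(s, a)ᵀ μ(s')); the encoder architectures, the names `DynamicsEncoder`, `NextStateEncoder`, and `nce_pretrain_loss`, and the InfoNCE-style objective are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of pre-training a dynamics
# representation phi(s, a) with an InfoNCE-style contrastive loss,
# under the assumed factorization T(s' | s, a) ~ exp(phi(s, a)^T mu(s')).
# Architectures and hyperparameters below are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicsEncoder(nn.Module):
    """phi(s, a): maps a state-action pair to a d-dimensional representation."""
    def __init__(self, state_dim: int, action_dim: int, repr_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, repr_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

class NextStateEncoder(nn.Module):
    """mu(s'): maps a next state into the same d-dimensional space."""
    def __init__(self, state_dim: int, repr_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, repr_dim),
        )

    def forward(self, next_state):
        return self.net(next_state)

def nce_pretrain_loss(phi, mu, states, actions, next_states):
    """InfoNCE-style loss: each transition's observed next state is the
    positive; next states from other transitions in the batch serve as
    negatives, so no explicit noise distribution is needed."""
    z_sa = phi(states, actions)                  # (B, d)
    z_next = mu(next_states)                     # (B, d)
    logits = z_sa @ z_next.t()                   # (B, B) similarity scores
    labels = torch.arange(len(states), device=logits.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)
```

Because a loss of this form depends only on observed transitions, never on rewards or expert labels, the encoders could be pre-trained on any data collected under the same dynamics (e.g., non-expert or simulator rollouts) and then reused for downstream imitation learning, which matches the data-reuse argument in the abstract.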
Similar Papers
Using Non-Expert Data to Robustify Imitation Learning via Offline Reinforcement Learning
Robotics
Teaches robots to learn from bad examples.
Train Robots in a JIF: Joint Inverse and Forward Dynamics with Human and Robot Demonstrations
Robotics
Robots learn faster from watching people move things.