Offline Imitation Learning upon Arbitrary Demonstrations by Pre-Training Dynamics Representations

Published: August 20, 2025 | arXiv ID: 2508.14383v1

By: Haitong Ma, Bo Dai, Zhaolin Ren and more

Potential Business Impact:

Enables robots to imitate expert behavior from far fewer demonstrations by pre-training on cheap, non-expert data.

Business Areas:
Motion Capture, Media and Entertainment, Video

Limited data has become a major bottleneck in scaling up offline imitation learning (IL). In this paper, we propose enhancing IL performance under limited expert data by introducing a pre-training stage that learns dynamics representations, derived from factorizations of the transition dynamics. We first theoretically justify that the optimal decision variable of offline IL lies in the representation space, significantly reducing the number of parameters to learn in the downstream IL stage. Moreover, the dynamics representations can be learned from arbitrary data collected under the same dynamics, allowing the reuse of massive non-expert data and mitigating the limited-data issue. We present a tractable loss function, inspired by noise contrastive estimation, for learning the dynamics representations at the pre-training stage. Experiments on MuJoCo demonstrate that our proposed algorithm can mimic expert policies with as few as a single trajectory. Experiments on real quadrupeds show that we can leverage pre-trained dynamics representations from simulator data to learn to walk from a few real-world demonstrations.
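The abstract does not spell out the pre-training objective, but the described recipe, a low-rank factorization of the transition dynamics learned with a noise-contrastive-estimation-style loss, can be sketched concretely. Below is a minimal, hypothetical PyTorch illustration assuming the factorization P(s' | s, a) ∝ exp(φ(s, a)ᵀ μ(s')); the network shapes, module names, and the exact InfoNCE-style loss are my assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical encoders realizing a low-rank dynamics factorization:
# phi(s, a) and mu(s') map into a shared d-dimensional representation space,
# so that P(s' | s, a) is approximated up to normalization by
# exp(phi(s, a)^T mu(s')).
class StateActionEncoder(nn.Module):
    def __init__(self, state_dim, action_dim, repr_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, repr_dim),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

class NextStateEncoder(nn.Module):
    def __init__(self, state_dim, repr_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, repr_dim),
        )

    def forward(self, s_next):
        return self.net(s_next)

def nce_dynamics_loss(phi, mu):
    """InfoNCE-style contrastive loss: for each (s, a) in the batch, the
    observed next state s' is the positive, and the other next states in
    the batch act as negatives. Positives sit on the diagonal of the
    similarity matrix."""
    logits = phi @ mu.T                       # (B, B) similarity scores
    labels = torch.arange(phi.size(0), device=phi.device)
    return F.cross_entropy(logits, labels)

# Pre-training step on arbitrary (non-expert) transitions (s, a, s')
# sampled from any data collected under the same dynamics:
# phi_net = StateActionEncoder(state_dim, action_dim)
# mu_net = NextStateEncoder(state_dim)
# loss = nce_dynamics_loss(phi_net(s, a), mu_net(s_next))
# loss.backward()
```

Because this objective needs only (s, a, s') triples, not rewards or expert labels, it can in principle consume any trajectories collected under the same dynamics, which is what lets the approach reuse massive non-expert data before the downstream IL stage.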

Country of Origin
🇺🇸 United States

Page Count
7 pages

Category
Computer Science:
Robotics