Train Robots in a JIF: Joint Inverse and Forward Dynamics with Human and Robot Demonstrations

Published: March 15, 2025 | arXiv ID: 2503.12297v3

By: Gagan Khandate, Boxuan Wang, Sarah Park, and more

Potential Business Impact:

Robots learn faster from watching people move things.

Business Areas:
Industrial Automation, Manufacturing, Science and Engineering

Pre-training on large datasets of robot demonstrations is a powerful technique for learning diverse manipulation skills, but it is often limited by the high cost and complexity of collecting robot-centric data, especially for tasks requiring tactile feedback. This work addresses these challenges by introducing a novel method for pre-training with multi-modal human demonstrations. Our approach jointly learns inverse and forward dynamics to extract latent state representations that are specific to manipulation. This enables efficient fine-tuning with only a small number of robot demonstrations, significantly improving data efficiency. Furthermore, our method allows for the use of multi-modal data, such as the combination of vision and touch, for manipulation. By leveraging latent dynamics modeling and tactile sensing, this approach paves the way for scalable robot manipulation learning based on human demonstrations.
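To make the joint objective concrete, below is a minimal sketch of how inverse and forward dynamics losses can be combined to shape a shared latent state space, in the spirit the abstract describes. This is not the paper's implementation: the encoder architecture, dimensions, use of MSE losses, and the assumption that demonstration actions are available (e.g., extracted hand or end-effector motion) are all illustrative assumptions.

```python
# Hedged sketch: joint inverse + forward dynamics pre-training objective.
# All module names, sizes, and loss choices are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointDynamicsModel(nn.Module):
    def __init__(self, obs_dim: int, action_dim: int, latent_dim: int = 64):
        super().__init__()
        # Encoder maps raw (possibly multi-modal) observations to a latent z_t.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )
        # Inverse model: predict the action that connects z_t to z_{t+1}.
        self.inverse = nn.Sequential(
            nn.Linear(2 * latent_dim, 256), nn.ReLU(), nn.Linear(256, action_dim)
        )
        # Forward model: predict z_{t+1} from z_t and the action a_t.
        self.forward_model = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def loss(self, obs_t, obs_next, action):
        z_t = self.encoder(obs_t)
        z_next = self.encoder(obs_next)
        # Inverse-dynamics loss: actions supervise the latent representation.
        a_pred = self.inverse(torch.cat([z_t, z_next], dim=-1))
        inv_loss = F.mse_loss(a_pred, action)
        # Forward-dynamics loss: predict the next latent state; detaching the
        # target is one common way to avoid representation collapse.
        z_pred = self.forward_model(torch.cat([z_t, action], dim=-1))
        fwd_loss = F.mse_loss(z_pred, z_next.detach())
        return inv_loss + fwd_loss
```

After pre-training this objective on human demonstration data, the encoder (and optionally the dynamics heads) would be fine-tuned on a small set of robot demonstrations; for vision-plus-touch inputs, one simple option is to encode each modality separately and concatenate before the latent projection.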

Country of Origin
🇺🇸 United States

Page Count
9 pages

Category
Computer Science:
Robotics