One-Shot Dual-Arm Imitation Learning
By: Yilong Wang, Edward Johns
Potential Business Impact:
Robots learn precise two-armed tasks from a single demonstration.
We introduce One-Shot Dual-Arm Imitation Learning (ODIL), which enables dual-arm robots to learn precise and coordinated everyday tasks from just a single demonstration of the task. ODIL uses a new three-stage visual servoing (3-VS) method for precise alignment between the end-effector and target object, after which replay of the demonstration trajectory is sufficient to perform the task. This is achieved without requiring prior task or object knowledge, or additional data collection and training following the single demonstration. Furthermore, we propose a new dual-arm coordination paradigm for learning dual-arm tasks from a single demonstration. ODIL was tested on a real-world dual-arm robot, demonstrating state-of-the-art performance across six precise and coordinated tasks in both 4-DoF and 6-DoF settings, and showing robustness in the presence of distractor objects and partial occlusions. Videos are available at: https://www.robot-learning.uk/one-shot-dual-arm.
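The abstract describes a two-phase pipeline: servo the end-effector into alignment with the target object, then replay the demonstrated trajectory relative to that aligned pose. The sketch below is a minimal toy illustration of that align-then-replay idea, not the paper's method: the `servo_stage` helper, the coarse-to-fine gain/tolerance schedule standing in for the three stages of 3-VS, and the pose representation are all hypothetical assumptions.

```python
import numpy as np

def servo_stage(pose, target, gain, tol, max_iters=100):
    """One illustrative servoing stage: iteratively move the pose
    toward the estimated target until within tolerance.
    (Hypothetical stand-in for one stage of the paper's 3-VS.)"""
    for _ in range(max_iters):
        error = target - pose
        if np.linalg.norm(error) < tol:
            break
        pose = pose + gain * error
    return pose

def align_then_replay(start_pose, target_pose, demo_trajectory):
    # Coarse-to-fine schedule: each stage tightens the tolerance.
    # Gains and tolerances are illustrative, not from the paper.
    pose = start_pose
    for gain, tol in [(0.3, 1e-1), (0.5, 1e-2), (0.8, 1e-4)]:
        pose = servo_stage(pose, target_pose, gain, tol)
    # After alignment, replay the single demonstration as
    # offsets relative to the aligned end-effector pose.
    return [pose + offset for offset in demo_trajectory]

start = np.array([0.0, 0.0, 0.0])
target = np.array([0.4, 0.2, 0.1])
demo = [np.array([0.0, 0.0, -0.05]), np.array([0.0, 0.1, -0.05])]
executed = align_then_replay(start, target, demo)
```

The key property this toy preserves is that no task-specific training happens after the demonstration: alignment is closed-loop, and everything downstream is pure replay.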
Similar Papers
Dexterous Manipulation through Imitation Learning: A Survey
Robotics
Robots learn to pick up and move things like humans.
Correspondence-Oriented Imitation Learning: Flexible Visuomotor Control with 3D Conditioning
Robotics
Teaches robots to copy human movements precisely.
Dual Iterative Learning Control for Multiple-Input Multiple-Output Dynamics with Validation in Robotic Systems
Systems and Control
Robots learn to move perfectly without help.