Hybrid-Diffusion Models: Combining Open-loop Routines with Visuomotor Diffusion Policies
By: Jonne Van Haastregt, Bastian Orthmann, Michael C. Welle, and more
Potential Business Impact:
Robots combine learned skills with scripted moves to do precise jobs faster and more reliably.
Although visuomotor policies obtained via imitation learning achieve strong performance on complex manipulation tasks, they usually struggle to match the accuracy and speed of traditional control-based methods. In this work, we introduce Hybrid-Diffusion models that combine open-loop routines with visuomotor diffusion policies. We develop Teleoperation Augmentation Primitives (TAPs) that allow the operator to perform predefined routines, such as locking specific axes, moving to perching waypoints, or triggering task-specific routines, seamlessly during demonstrations. Our Hybrid-Diffusion method learns to trigger such TAPs during inference. We validate the method on challenging real-world tasks: Vial Aspiration, Open-Container Liquid Transfer, and Container Unscrewing. All experimental videos are available on the project's website: https://hybriddiffusion.github.io/
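As a rough illustration (not the authors' implementation), a hybrid policy of this kind can be sketched as a control loop in which the learned diffusion policy either emits short-horizon closed-loop actions or triggers a predefined open-loop routine (a TAP). All names and the exact policy interface below are assumptions for the sketch.

# Minimal sketch of a hybrid diffusion control loop (assumed structure, not the paper's code).
# We assume the policy output carries an optional discrete TAP trigger alongside continuous actions.

from dataclasses import dataclass
from typing import Callable, Dict, Optional, Sequence

@dataclass
class PolicyOutput:
    actions: Sequence[Sequence[float]]   # short horizon of end-effector commands
    tap_id: Optional[str] = None         # id of a predefined routine to trigger, if any

def run_hybrid_episode(
    get_observation: Callable[[], object],
    diffusion_policy: Callable[[object], PolicyOutput],
    taps: Dict[str, Callable[[], None]],  # maps TAP id -> open-loop routine
    send_action: Callable[[Sequence[float]], None],
    max_steps: int = 500,
) -> None:
    """Alternate between closed-loop diffusion actions and open-loop TAP routines."""
    for _ in range(max_steps):
        obs = get_observation()
        out = diffusion_policy(obs)
        if out.tap_id is not None and out.tap_id in taps:
            # Hand control to the predefined routine (e.g. lock an axis,
            # move to a perching waypoint), then return to visuomotor control.
            taps[out.tap_id]()
        else:
            # Execute the diffusion policy's closed-loop action chunk.
            for action in out.actions:
                send_action(action)

The key design choice this sketch highlights is that the learned policy only decides when to hand off control; the TAP itself runs open loop with the accuracy and speed of a scripted routine.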
Similar Papers
Diffusion Models for Robotic Manipulation: A Survey
Robotics
Teaches robots to pick up and move things.
Learning Generalizable Visuomotor Policy through Dynamics-Alignment
Robotics
Teaches robots to learn from mistakes better.
3D Flow Diffusion Policy: Visuomotor Policy Learning via Generating Flow in 3D Space
Robotics
Robots learn to grab and move things better.