CCDP: Composition of Conditional Diffusion Policies with Guided Sampling
By: Amirreza Razmjoo, Sylvain Calinon, Michael Gienger, and more
Potential Business Impact:
Robots learn to recover from failed attempts instead of blindly retrying.
Imitation Learning offers a promising approach to learn directly from data without requiring explicit models, simulations, or detailed task definitions. During inference, actions are sampled from the learned distribution and executed on the robot. However, sampled actions may fail for various reasons, and simply repeating the sampling step until a successful action is obtained can be inefficient. In this work, we propose an enhanced sampling strategy that refines the sampling distribution to avoid previously unsuccessful actions. We demonstrate that by solely utilizing data from successful demonstrations, our method can infer recovery actions without the need for additional exploratory behavior or a high-level controller. Furthermore, we leverage the concept of diffusion model decomposition to break down the primary problem (which may require long-horizon history to manage failures) into multiple smaller, more manageable sub-problems in learning, data collection, and inference, thereby enabling the system to adapt to variable failure counts. Our approach yields a low-level controller that dynamically adjusts its sampling space to improve efficiency when prior samples fall short. We validate our method across several tasks, including door opening with unknown directions, object manipulation, and button-searching scenarios, demonstrating that our approach outperforms traditional baselines.
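The core idea of refining the sampling distribution to steer away from previously unsuccessful actions can be illustrated with a toy sketch. This is not the authors' implementation (which uses conditional diffusion models); here the learned policy is stood in for by a fixed list of candidate actions, and the "refined" distribution is approximated by down-weighting candidates close to past failures. The function name, the Gaussian down-weighting kernel, and the `sigma` parameter are all illustrative assumptions.

```python
import math
import random

def sample_action(candidates, failed_actions, sigma=0.5):
    """Sample an action, avoiding regions near previously failed attempts.

    Toy sketch: each candidate's weight is reduced by a Gaussian bump
    centered on every past failure, so the sampler naturally explores
    actions far from what has already been tried unsuccessfully.
    """
    weights = []
    for a in candidates:
        w = 1.0
        for f in failed_actions:
            # Down-weight candidates close to a failed action.
            w *= 1.0 - math.exp(-((a - f) ** 2) / (2 * sigma ** 2))
        weights.append(w)
    total = sum(weights)
    if total == 0:
        # All candidates have failed before; fall back to uniform sampling.
        weights = [1.0] * len(candidates)
    return random.choices(candidates, weights=weights, k=1)[0]

# Example: after failing at action 0.0, the sampler strongly prefers 5.0.
action = sample_action([0.0, 5.0], failed_actions=[0.0])
```

With no failure history the weights stay uniform, so the sketch reduces to ordinary sampling from the learned distribution, mirroring how the paper's method only adjusts the sampling space after prior samples fall short.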
Similar Papers
Latent Diffusion Planning for Imitation Learning
Robotics
Teaches robots to learn from less perfect examples.
CDP: Towards Robust Autoregressive Visuomotor Policy Learning via Causal Diffusion
CV and Pattern Recognition
Robots learn better by remembering past actions.
Reinforcement Learning via Implicit Imitation Guidance
Machine Learning (CS)
Teaches robots new skills faster with smart guesses.