Two-Steps Diffusion Policy for Robotic Manipulation via Genetic Denoising
By: Mateo Clemente, Leo Brunswic, Rui Heng Yang, and more
Potential Business Impact:
Robots learn tasks with fewer tries.
Diffusion models, such as diffusion policy, have achieved state-of-the-art results in robotic manipulation by imitating expert demonstrations. While diffusion models were originally developed for vision tasks such as image and video generation, many of their inference strategies have been transferred directly to control domains without adaptation. In this work, we show that by tailoring the denoising process to the specific characteristics of embodied AI tasks -- particularly the structured, low-dimensional nature of action distributions -- diffusion policies can operate effectively with as few as 5 neural function evaluations (NFE). Building on this insight, we propose a population-based sampling strategy, genetic denoising, which enhances both performance and stability by selecting denoising trajectories with low out-of-distribution risk. Our method solves challenging tasks with only 2 NFE while matching or improving performance. We evaluate our approach across 14 robotic manipulation tasks from D4RL and Robomimic, spanning multiple action horizons and inference budgets. Across over 2 million evaluations, our method consistently outperforms standard diffusion-based policies, achieving up to 20% performance gains with significantly fewer inference steps.
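To make the idea concrete, here is a minimal sketch of what a population-based, few-step denoising loop of this kind could look like. It is not the authors' implementation: the epsilon-predictor `denoiser`, the DDIM-style update, and the out-of-distribution (OOD) risk score (approximated here by distance to a reference action mean) are all illustrative stand-ins, and every name and hyperparameter below is an assumption.

```python
# Sketch of population-based few-step denoising in the spirit of "genetic denoising".
# Assumptions (not from the paper): a trained noise predictor `denoiser`, a DDIM-like
# deterministic update, and an OOD risk proxy based on distance to a reference action.
import torch

torch.manual_seed(0)

ACTION_DIM = 7      # e.g., a 7-DoF manipulator action
POPULATION = 16     # number of candidate denoising trajectories
NFE = 2             # neural function evaluations per trajectory (few-step regime)

def denoiser(x_t, t, obs):
    # A real policy would run a trained network here; this toy stand-in just
    # returns a scaled version of the input so the loop is runnable.
    return 0.1 * x_t + 0.01 * t

# Stand-in OOD risk: distance of a candidate action to a reference (expert-like) mean.
reference_mean = torch.zeros(ACTION_DIM)
def ood_risk(action):
    return torch.linalg.norm(action - reference_mean)

def genetic_denoise(obs, steps=NFE, population=POPULATION):
    # Coarse noise schedule: a handful of levels from high noise down to zero.
    timesteps = torch.linspace(1.0, 0.0, steps + 1)
    # Initialise a population of noisy action candidates.
    x = torch.randn(population, ACTION_DIM)
    for i in range(steps):
        t, t_next = timesteps[i], timesteps[i + 1]
        eps = denoiser(x, t, obs)            # one NFE, batched over the population
        x0_hat = x - t * eps                 # predicted clean action
        x = x0_hat + t_next * eps            # deterministic step toward the next level
        # Selection: keep low-OOD-risk candidates, refill the population around them.
        risk = torch.stack([ood_risk(a) for a in x])
        elite = x[torch.argsort(risk)[: population // 2]]
        offspring = elite + 0.05 * torch.randn_like(elite)   # small mutation
        x = torch.cat([elite, offspring], dim=0)
    # Return the single lowest-risk action after the final step.
    risk = torch.stack([ood_risk(a) for a in x])
    return x[torch.argmin(risk)]

if __name__ == "__main__":
    obs = torch.zeros(10)                    # placeholder observation
    print("selected action:", genetic_denoise(obs))
```

The key design choice the sketch illustrates is that the population is evaluated in a single batched forward pass, so selecting among many denoising trajectories does not increase the NFE count beyond the few steps of the schedule.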
Similar Papers
D3P: Dynamic Denoising Diffusion Policy via Reinforcement Learning
Robotics
Robot actions become faster without mistakes.
Real-Time Iteration Scheme for Diffusion Policy
Robotics
Makes robots move faster without retraining.
ADPro: a Test-time Adaptive Diffusion Policy for Robot Manipulation via Manifold and Initial Noise Constraints
Robotics
Robots learn to do tasks faster and better.