Score: 2

Two-Steps Diffusion Policy for Robotic Manipulation via Genetic Denoising

Published: October 24, 2025 | arXiv ID: 2510.21991v1

By: Mateo Clemente, Leo Brunswic, Rui Heng Yang, and more

BigTech Affiliations: Huawei

Potential Business Impact:

Robots can act faster by computing actions in far fewer inference steps.

Business Areas:
Industrial Automation, Manufacturing, Science and Engineering

Diffusion models, such as diffusion policy, have achieved state-of-the-art results in robotic manipulation by imitating expert demonstrations. While diffusion models were originally developed for vision tasks like image and video generation, many of their inference strategies have been transferred directly to control domains without adaptation. In this work, we show that by tailoring the denoising process to the specific characteristics of embodied AI tasks, particularly the structured, low-dimensional nature of action distributions, diffusion policies can operate effectively with as few as 5 neural function evaluations (NFE). Building on this insight, we propose a population-based sampling strategy, genetic denoising, which enhances both performance and stability by selecting denoising trajectories with low out-of-distribution risk. Our method solves challenging tasks with only 2 NFE while matching or improving performance. We evaluate our approach across 14 robotic manipulation tasks from D4RL and Robomimic, spanning multiple action horizons and inference budgets. In over 2 million evaluations, our method consistently outperforms standard diffusion-based policies, achieving up to 20% performance gains with significantly fewer inference steps.
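The population-based idea in the abstract can be sketched as follows. This is a hedged, minimal illustration, not the paper's actual algorithm: `denoise_step`, `ood_score`, the selection/resampling rule, and all parameter names are assumptions chosen for clarity. A population of noise samples is denoised for a small number of steps (the NFE budget), and at each step candidates with lower out-of-distribution risk are kept and resampled.

```python
import numpy as np

def denoise_step(x, t, model):
    # One denoising update; `model` stands in for a trained diffusion
    # policy's noise predictor (placeholder dynamics, not the real sampler).
    return x - 0.5 * t * model(x)

def genetic_denoising(model, ood_score, action_dim, pop_size=8, nfe=2, seed=0):
    """Toy population-based denoising: keep candidates with low
    OOD risk, resample around survivors, return the best action."""
    rng = np.random.default_rng(seed)
    pop = rng.standard_normal((pop_size, action_dim))  # initial pure noise
    for t in np.linspace(1.0, 0.0, nfe + 1)[:-1]:      # nfe denoising steps
        pop = np.array([denoise_step(x, t, model) for x in pop])
        scores = np.array([ood_score(x) for x in pop])
        survivors = pop[np.argsort(scores)[: pop_size // 2]]  # low-risk half
        # Refill the population by jittering randomly chosen survivors.
        n_new = pop_size - len(survivors)
        children = survivors[rng.integers(0, len(survivors), n_new)]
        children = children + 0.05 * rng.standard_normal((n_new, action_dim))
        pop = np.concatenate([survivors, children])
    scores = np.array([ood_score(x) for x in pop])
    return pop[int(np.argmin(scores))]  # lowest-risk final action
```

In practice the OOD score would come from the policy itself (e.g., model likelihood or an ensemble disagreement measure); here any callable returning a scalar per candidate works.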

Country of Origin
🇨🇳 China

Page Count
23 pages

Category
Computer Science:
Robotics