RA-DP: Rapid Adaptive Diffusion Policy for Training-Free High-frequency Robotics Replanning
By: Xi Ye, Rui Heng Yang, Jun Jin, and more
Potential Business Impact:
Robots quickly adapt to new tasks in changing environments.
Diffusion models exhibit impressive scalability in robotic task learning, yet they struggle to adapt to novel, highly dynamic environments. This limitation primarily stems from their constrained replanning ability: they either operate at a low frequency due to the time-consuming iterative sampling process, or are unable to adapt to unforeseen feedback when replanning rapidly. To address these challenges, we propose RA-DP, a novel diffusion policy framework with training-free, high-frequency replanning ability for adapting to unforeseen dynamic environments. Specifically, our method integrates guidance signals, which are often easily obtained in a new environment, into the diffusion sampling process, and uses a novel action queue mechanism to emit replanned actions at every denoising step without retraining, forming a complete training-free framework for robot motion adaptation in unseen environments. Extensive evaluations have been conducted on both well-recognized simulation benchmarks and real robot tasks. Results show that RA-DP outperforms state-of-the-art diffusion-based methods in both replanning frequency and success rate. Moreover, we show that our framework is theoretically compatible with any training-free guidance signal.
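To make the mechanism concrete, below is a minimal sketch of the general idea: training-free guidance injected into a DDPM-style reverse pass, with an action queue that exposes a replanned action at every denoising step rather than after the full chain. This is not the authors' implementation; the denoiser, guidance cost, schedule, and all parameter names (`denoiser`, `guidance_cost`, `scale`, `goal`) are illustrative assumptions.

```python
import collections
import torch

# Sketch of training-free guided diffusion replanning with an action
# queue, in the spirit of RA-DP. All names below are assumptions for
# illustration, not the paper's API.

T = 50                                  # denoising steps
H, A = 8, 2                             # action horizon, action dimension
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def denoiser(x, t):
    # Stand-in for a frozen, pretrained diffusion policy's noise
    # predictor; no retraining happens anywhere in this loop.
    return torch.zeros_like(x)

def guidance_cost(x, obs):
    # Hypothetical training-free guidance signal: squared distance of
    # each action to a goal observed in the (possibly changed) scene.
    # Any differentiable cost could be plugged in here.
    return ((x - obs["goal"]) ** 2).sum()

def guided_denoise(obs, scale=1.0):
    """One reverse pass; yields a replanned action at every step."""
    x = torch.randn(H, A)
    queue = collections.deque()
    for t in reversed(range(T)):
        # Standard DDPM reverse-step mean from the frozen denoiser.
        eps = denoiser(x, t)
        a_bar = alpha_bars[t]
        mean = (x - betas[t] / torch.sqrt(1 - a_bar) * eps) / torch.sqrt(alphas[t])

        # Training-free guidance: nudge the mean down the cost gradient.
        x_req = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(guidance_cost(x_req, obs), x_req)[0]
        mean = mean - scale * grad

        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise

        # Action queue: expose the current first action immediately, so
        # the controller can act at every denoising step instead of
        # waiting for sampling to finish.
        queue.append(x[0].detach().clone())
        yield queue.popleft()

obs = {"goal": torch.zeros(A)}
for action in guided_denoise(obs):
    pass  # send `action` to the robot controller at high frequency
```

Because the guidance term only requires a differentiable cost over candidate actions, any such signal can be swapped in without touching the pretrained denoiser, which is what makes the approach training-free.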
Similar Papers
Adaptive Diffusion Policy Optimization for Robotic Manipulation
Robotics
Teaches robots to learn tasks faster and better.
ADPro: a Test-time Adaptive Diffusion Policy for Robot Manipulation via Manifold and Initial Noise Constraints
Robotics
Robots learn to do tasks faster and better.
CDP: Towards Robust Autoregressive Visuomotor Policy Learning via Causal Diffusion
CV and Pattern Recognition
Robots learn better by remembering past actions.