Self-Guided Action Diffusion
By: Rhea Malhotra, Yuejiang Liu, Chelsea Finn
Potential Business Impact:
Robots learn to move better, faster, and cheaper.
Recent works have shown the promise of inference-time search over action samples for improving generative robot policies. In particular, optimizing cross-chunk coherence via bidirectional decoding has proven effective in boosting the consistency and reactivity of diffusion policies. However, this approach remains computationally expensive as the diversity of sampled actions grows. In this paper, we introduce self-guided action diffusion, a more efficient variant of bidirectional decoding tailored for diffusion-based policies. At the core of our method is guiding the proposal distribution at each diffusion step based on the prior decision. Experiments on simulation tasks show that the proposed self-guidance enables near-optimal performance at negligible inference cost. Notably, under a tight sampling budget, our method achieves up to 70% higher success rates than existing counterparts on challenging dynamic tasks. See the project website at https://rhea-mal.github.io/selfgad.github.io.
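The abstract describes the core idea only at a high level: bias the proposal distribution at each diffusion step toward the previously executed action chunk. The sketch below is a minimal, hypothetical illustration of that idea in a generic DDPM-style sampling loop, not the authors' implementation; all names (denoiser, guidance_weight, the noise schedule) are assumptions introduced for clarity.

```python
# Hedged sketch (not the paper's code): nudge each denoising step of a
# diffusion policy toward the previously executed action chunk, so that
# consecutive chunks stay coherent without searching over many samples.
import numpy as np


def sample_action_chunk(denoiser, prev_chunk, chunk_shape,
                        num_steps=10, guidance_weight=0.3, rng=None):
    """Run a DDPM-like reverse process with self-guidance.

    denoiser(x, t) -> estimate of the clean action chunk at step t (assumed API).
    prev_chunk     -> action chunk executed at the previous control step
                      (the "prior decision"); None for the very first chunk.
    """
    rng = rng or np.random.default_rng()
    x = rng.standard_normal(chunk_shape)          # start from Gaussian noise
    for t in reversed(range(num_steps)):
        x0_hat = denoiser(x, t)                   # predicted clean action chunk
        if prev_chunk is not None:
            # Self-guidance: pull the proposal toward the prior decision,
            # decaying the pull as denoising proceeds (t -> 0).
            w = guidance_weight * (t + 1) / num_steps
            x0_hat = (1.0 - w) * x0_hat + w * prev_chunk
        noise_scale = t / num_steps               # crude noise schedule
        x = x0_hat + noise_scale * rng.standard_normal(chunk_shape)
    return x


# Usage with a dummy denoiser that simply shrinks x toward zero.
if __name__ == "__main__":
    dummy_denoiser = lambda x, t: 0.9 * x
    prev = np.zeros((8, 7))                       # e.g. 8-step chunk of 7-DoF actions
    chunk = sample_action_chunk(dummy_denoiser, prev, prev.shape)
    print(chunk.shape)
```

The linear blend toward the prior chunk is a placeholder for whatever guidance rule the paper actually uses; the point is that coherence is encouraged inside the sampling loop itself, rather than by drawing and ranking many candidate chunks.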
Similar Papers
ADPro: a Test-time Adaptive Diffusion Policy for Robot Manipulation via Manifold and Initial Noise Constraints
Robotics
Robots learn to do tasks faster and better.
Real-Time Iteration Scheme for Diffusion Policy
Robotics
Makes robots move faster without retraining.
X-Diffusion: Training Diffusion Policies on Cross-Embodiment Human Demonstrations
Robotics
Teaches robots to copy human actions better.