Diffusion Dynamics Models with Generative State Estimation for Cloth Manipulation
By: Tongxuan Tian, Haoyang Li, Bo Ai, and more
Potential Business Impact:
Helps robots fold clothes more reliably.
Cloth manipulation is challenging due to its highly complex dynamics, near-infinite degrees of freedom, and frequent self-occlusions, which complicate both state estimation and dynamics modeling. Inspired by recent advances in generative models, we hypothesize that these expressive models can effectively capture intricate cloth configurations and deformation patterns from data. Therefore, we propose a diffusion-based generative approach for both perception and dynamics modeling. Specifically, we formulate state estimation as reconstructing full cloth states from partial observations and dynamics modeling as predicting future states given the current state and robot actions. Leveraging a transformer-based diffusion model, our method achieves accurate state reconstruction and reduces long-horizon dynamics prediction errors by an order of magnitude compared to prior approaches. We integrate our dynamics models with model predictive control and show that our framework enables effective cloth folding on real robotic systems, demonstrating the potential of generative models for deformable object manipulation under partial observability and complex dynamics.
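To make the pipeline concrete, here is a minimal PyTorch sketch of the two pieces the abstract describes: a transformer-based diffusion model that samples the next cloth state conditioned on the current state and a robot action, and a simple random-shooting model predictive control loop on top of it. All names (`ClothDiffusionDynamics`, `sample_next_state`, `mpc_plan`), the flattened-vector cloth state, and the random-shooting planner are assumptions for illustration, not the authors' implementation; the state estimator would follow the same recipe with the conditioning swapped from (state, action) to a partial observation.

```python
# Hypothetical sketch of a diffusion dynamics model + MPC, NOT the paper's code.
import torch
import torch.nn as nn

class ClothDiffusionDynamics(nn.Module):
    """Transformer denoiser: predicts the noise on a noisy next state,
    conditioned on the current state, the robot action, and the timestep."""
    def __init__(self, state_dim, action_dim, d_model=128, n_layers=4):
        super().__init__()
        self.state_in = nn.Linear(state_dim, d_model)
        self.cond_in = nn.Linear(state_dim + action_dim + 1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, state_dim)

    def forward(self, noisy_next, state, action, t):
        # Two tokens: the noisy next state and the conditioning information.
        cond = torch.cat([state, action, t[:, None].float()], dim=-1)
        tokens = torch.stack([self.state_in(noisy_next), self.cond_in(cond)], dim=1)
        return self.out(self.backbone(tokens)[:, 0])  # predicted noise

@torch.no_grad()
def sample_next_state(model, state, action, n_steps=50):
    """DDPM-style reverse process: start from Gaussian noise and iteratively
    denoise to draw one sample of the next cloth state."""
    betas = torch.linspace(1e-4, 0.02, n_steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn_like(state)
    for t in reversed(range(n_steps)):
        t_b = torch.full((state.shape[0],), t, dtype=torch.long)
        eps = model(x, state, action, t_b)
        x = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x

@torch.no_grad()
def mpc_plan(model, state, goal, horizon=3, n_candidates=64, action_dim=4):
    """Random-shooting MPC: sample candidate action sequences, roll each one
    out through the learned dynamics, and return the first action of the
    sequence whose final state is closest to the goal configuration."""
    states = state.expand(n_candidates, -1).clone()
    actions = torch.randn(n_candidates, horizon, action_dim)
    for h in range(horizon):
        states = sample_next_state(model, states, actions[:, h])
    costs = ((states - goal) ** 2).sum(dim=-1)  # distance to goal cloth state
    return actions[costs.argmin(), 0]
```

With a trained model, `mpc_plan(model, current_state, goal_state)` yields the next action to execute before replanning, matching the abstract's dynamics-plus-MPC framework; a stronger sampling-based optimizer such as the cross-entropy method could replace random shooting without changing the interface.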
Similar Papers
D-Garment: Physics-Conditioned Latent Diffusion for Dynamic Garment Deformations
CV and Pattern Recognition
Makes virtual clothes look real on moving bodies.
Controllable Motion Generation via Diffusion Modal Coupling
Robotics
Robots can now choose the best way to move.
Diffusion Models for Robotic Manipulation: A Survey
Robotics
Teaches robots to pick up and move things.