Automated Action Generation based on Action Field for Robotic Garment Manipulation
By: Hu Cheng, Fuyuki Tokuda, Kazuhiro Kosuge
Potential Business Impact:
Robots can now flatten and unfold clothes faster and more accurately.
Garment manipulation with robotic systems is a challenging task due to the diverse shapes and deformable nature of fabric. In this paper, we propose a novel method for robotic garment manipulation that significantly improves manipulation accuracy while reducing computation time compared with previous approaches. Our method features an action generator that directly interprets scene images and produces pixel-wise end-effector action vectors using a neural network. The network also predicts a manipulation score map that ranks candidate actions, allowing the system to select the most effective one. Extensive simulation experiments demonstrate that our method achieves better unfolding and alignment performance and shorter computation time than previous approaches. Real-world experiments show that the proposed method generalizes well to different garment types and successfully flattens garments.
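To make the idea concrete, the sketch below is a minimal, hypothetical illustration of a pixel-wise "action field" predictor, not the authors' released code or architecture. It assumes a small fully convolutional network with two heads (one predicting a per-pixel end-effector action vector, one predicting a per-pixel manipulation score) and a 2D action parameterization; the layer sizes, head design, and selection rule are illustrative assumptions.

# Minimal sketch (assumed architecture, not the paper's implementation):
# a convolutional backbone maps an RGB scene image to per-pixel features,
# one head outputs an action vector per pixel (the "action field"),
# another head outputs a manipulation score per pixel, and the executed
# action is read out at the highest-scoring pixel.

import torch
import torch.nn as nn


class ActionFieldNet(nn.Module):
    def __init__(self, action_dim: int = 2):
        super().__init__()
        # Shared convolutional backbone (kept deliberately small here).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # Head 1: pixel-wise end-effector action vectors (action field).
        self.action_head = nn.Conv2d(64, action_dim, 1)
        # Head 2: pixel-wise manipulation score map for ranking actions.
        self.score_head = nn.Conv2d(64, 1, 1)

    def forward(self, image: torch.Tensor):
        feat = self.backbone(image)
        return self.action_head(feat), self.score_head(feat)


def select_action(net: ActionFieldNet, image: torch.Tensor):
    """Return the highest-scoring pixel and its predicted action vector."""
    with torch.no_grad():
        actions, scores = net(image)            # (1, A, H, W), (1, 1, H, W)
    w = scores.shape[-1]
    flat_idx = scores.view(-1).argmax().item()  # best-scoring pixel (row-major)
    py, px = divmod(flat_idx, w)
    return (py, px), actions[0, :, py, px]


if __name__ == "__main__":
    net = ActionFieldNet()
    rgb = torch.rand(1, 3, 128, 128)            # dummy scene image
    pixel, action = select_action(net, rgb)
    print("selected pixel:", pixel, "action vector:", action.tolist())

In this sketch, a single forward pass yields both the action field and the score map, so ranking and selecting an action reduces to an argmax over the score map rather than evaluating candidate actions one by one, which is consistent with the abstract's claim of reduced computation time.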
Similar Papers
GraphGarment: Learning Garment Dynamics for Bimanual Cloth Manipulation Tasks
Robotics
Robots learn to hang clothes by watching them move.
Robotic Automation in Apparel Manufacturing: A Novel Approach to Fabric Handling and Sewing
Robotics
Robots can now sew clothes by making fabric stiff.
DexGarmentLab: Dexterous Garment Manipulation Environment with Generalizable Policy
Robotics
Teaches robots to fold clothes like humans.