DG16M: A Large-Scale Dataset for Dual-Arm Grasping with Force-Optimized Grasps
By: Md Faizal Karim, Mohammed Saad Hashmi, Shreya Bollimuntha, and more
Potential Business Impact:
Robots learn to grab things better together.
Dual-arm robotic grasping is crucial for handling large objects that require stable, coordinated manipulation. While single-arm grasping has been studied extensively, datasets tailored to dual-arm settings remain scarce. We introduce a large-scale dataset of 16 million dual-arm grasps, evaluated under improved force-closure constraints. We also provide a benchmark of 300 objects with approximately 30,000 grasps, evaluated in a physics simulation environment, enabling a more rigorous grasp-quality assessment for dual-arm grasp synthesis methods. Finally, we demonstrate the dataset's effectiveness by training a Dual-Arm Grasp Classifier network that outperforms state-of-the-art methods by 15%, achieving higher grasp success rates and improved generalization across objects.
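The abstract's "force-closure constraints" refer to a standard grasp-quality criterion: a grasp is force-closure if its contacts can resist any external wrench (force plus torque) applied to the object. The sketch below is not the paper's actual formulation; it is a minimal, conventional test that discretizes each contact's friction cone into primitive wrenches and uses a linear program to check that those wrenches positively span R^6. The cube geometry, contact points, and friction coefficient are illustrative assumptions, chosen to mimic two parallel-jaw grippers pinching one object from two sides, as in a dual-arm grasp.

```python
import numpy as np
from scipy.optimize import linprog

def tangent_basis(n):
    """Return two unit tangent vectors orthogonal to the unit normal n."""
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(n, a)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    return t1, t2

def contact_wrenches(p, n, mu, n_edges=8):
    """Discretize the friction cone at contact point p (inward unit normal n)
    into n_edges primitive wrenches [force; torque about the origin]."""
    t1, t2 = tangent_basis(n)
    wrenches = []
    for k in range(n_edges):
        theta = 2.0 * np.pi * k / n_edges
        f = n + mu * (np.cos(theta) * t1 + np.sin(theta) * t2)
        wrenches.append(np.concatenate([f, np.cross(p, f)]))
    return wrenches

def is_force_closure(contacts, mu=0.5, n_edges=8):
    """A grasp is force-closure iff its primitive contact wrenches positively
    span R^6, i.e. every +/- basis wrench is a nonnegative combination."""
    W = np.column_stack([w for p, n in contacts
                         for w in contact_wrenches(np.asarray(p, float),
                                                   np.asarray(n, float),
                                                   mu, n_edges)])
    for k in range(6):
        for sign in (1.0, -1.0):
            d = np.zeros(6)
            d[k] = sign
            # Feasibility LP: does some alpha >= 0 satisfy W @ alpha = d?
            res = linprog(np.zeros(W.shape[1]), A_eq=W, b_eq=d,
                          bounds=(0, None), method="highs")
            if not res.success:
                return False
    return True

# Illustrative dual-arm grasp: two parallel-jaw grippers pinch a unit cube,
# one along x and one along y, at different heights (contact point, inward normal):
cube_contacts = [
    (( 1.0,  0.0,  0.5), (-1.0,  0.0, 0.0)),
    ((-1.0,  0.0,  0.5), ( 1.0,  0.0, 0.0)),
    (( 0.0,  1.0, -0.5), ( 0.0, -1.0, 0.0)),
    (( 0.0, -1.0, -0.5), ( 0.0,  1.0, 0.0)),
]
print(is_force_closure(cube_contacts))      # -> True: four contacts resist any wrench
print(is_force_closure(cube_contacts[:1]))  # -> False: one contact cannot pull the object
```

Because the friction cone is scale-invariant, the LP only asks whether each basis wrench direction is reachable; datasets like the one described here typically go further and score grasps by how large a disturbance the contacts can resist per unit contact force, but the feasibility test above captures the binary force-closure constraint.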
Similar Papers
GraspFactory: A Large Object-Centric Grasping Dataset
Robotics
Teaches robots to grab any new object.
ScaleADFG: Affordance-based Dexterous Functional Grasping via Scalable Dataset
Robotics
Robots can now grab objects of any size.
GraspClutter6D: A Large-scale Real-world Dataset for Robust Perception and Grasping in Cluttered Scenes
Robotics
Teaches robots to grab things in messy places.