Score: 1

XGrasp: Gripper-Aware Grasp Detection with Multi-Gripper Data Generation

Published: October 13, 2025 | arXiv ID: 2510.11036v1

By: Yeonseo Lee, Jungwook Mun, Hyosup Shin, and more

Potential Business Impact:

Robots can grasp a wider variety of objects using different gripper types.

Business Areas:
Image Recognition, Data and Analytics, Software

Most robotic grasping methods are designed for a single gripper type, which limits their applicability in real-world scenarios requiring diverse end-effectors. We propose XGrasp, a real-time, gripper-aware grasp detection framework that efficiently handles multiple gripper configurations. The method addresses data scarcity by systematically augmenting existing datasets with multi-gripper annotations. XGrasp employs a hierarchical two-stage architecture: in the first stage, a Grasp Point Predictor (GPP) identifies optimal grasp locations using global scene information and gripper specifications; in the second stage, an Angle-Width Predictor (AWP) refines the grasp angle and width using local features. Contrastive learning in the AWP module enables zero-shot generalization to unseen grippers by learning fundamental grasping characteristics. The modular framework integrates seamlessly with vision foundation models, providing a pathway toward future vision-language capabilities. Experimental results demonstrate competitive grasp success rates across various gripper types, along with substantial improvements in inference speed over existing gripper-aware methods. Project page: https://sites.google.com/view/xgrasp
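As a rough illustration of the two-stage pipeline the abstract describes, here is a minimal sketch in PyTorch. The module names GPP and AWP come from the abstract; everything else (the toy convolutional encoders, the 4-dimensional gripper-spec vector, the 32x32 local crop, and the contrastive embedding head) is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of the two-stage XGrasp pipeline described in the abstract.
# GPP/AWP names come from the paper; all layer sizes, the gripper-spec format,
# and the crop size are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraspPointPredictor(nn.Module):
    """Stage 1: score grasp points from the global scene plus gripper spec."""
    def __init__(self, spec_dim=4, feat_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(  # toy encoder standing in for a vision foundation model
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        self.spec_proj = nn.Linear(spec_dim, feat_dim)  # gripper spec -> feature space
        self.head = nn.Conv2d(feat_dim, 1, 1)           # per-pixel grasp-point score

    def forward(self, image, gripper_spec):
        feats = self.backbone(image)                           # (B, C, H, W)
        spec = self.spec_proj(gripper_spec)[:, :, None, None]  # (B, C, 1, 1)
        return self.head(feats + spec).squeeze(1)              # (B, H, W) heatmap

class AngleWidthPredictor(nn.Module):
    """Stage 2: refine angle and width from a local crop around the chosen point."""
    def __init__(self, spec_dim=4, feat_dim=32, embed_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.spec_proj = nn.Linear(spec_dim, feat_dim)
        self.embed = nn.Linear(feat_dim, embed_dim)  # embedding head for contrastive learning
        self.angle_width = nn.Linear(feat_dim, 2)    # (angle, width)

    def forward(self, crop, gripper_spec):
        z = self.encoder(crop) + self.spec_proj(gripper_spec)
        return self.angle_width(z), F.normalize(self.embed(z), dim=-1)

if __name__ == "__main__":
    gpp, awp = GraspPointPredictor(), AngleWidthPredictor()
    image = torch.rand(1, 3, 128, 128)
    spec = torch.rand(1, 4)  # e.g. max opening width, finger length, ... (assumed format)
    heatmap = gpp(image, spec)
    y, x = divmod(heatmap.flatten(1).argmax(1).item(), heatmap.shape[-1])
    # crop a 32x32 local patch around the predicted point, clamped to the image bounds
    y0, x0 = max(0, min(y - 16, 96)), max(0, min(x - 16, 96))
    patch = image[:, :, y0:y0 + 32, x0:x0 + 32]
    angle_width, embedding = awp(patch, spec)
    print("grasp point:", (y, x), "angle/width:", angle_width.tolist())
```

In this reading, the gripper spec conditions both stages, and the normalized AWP embedding is where a contrastive objective could pull together grasps with similar characteristics across grippers, which would give the zero-shot behavior the abstract claims.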

Page Count
9 pages

Category
Computer Science:
Robotics