GAT-Grasp: Gesture-Driven Affordance Transfer for Task-Aware Robotic Grasping
By: Ruixiang Wang, Huayi Zhou, Xinyue Yao, and more
Potential Business Impact:
Robots learn task-appropriate grasps by following human hand gestures.
Achieving precise and generalizable grasping across diverse objects and environments is essential for intelligent and collaborative robotic systems. However, existing approaches often struggle with ambiguous affordance reasoning and limited adaptability to unseen objects, leading to suboptimal grasp execution. In this work, we propose GAT-Grasp, a gesture-driven grasping framework that directly utilizes human hand gestures to guide the generation of task-specific grasp poses with appropriate positioning and orientation. Specifically, we introduce a retrieval-based affordance transfer paradigm, leveraging the implicit correlation between hand gestures and object affordances to extract grasping knowledge from large-scale human-object interaction videos. By eliminating the reliance on predefined object priors, GAT-Grasp enables zero-shot generalization to novel objects and cluttered environments. Real-world evaluations confirm its robustness across diverse and unseen scenarios, demonstrating reliable grasp execution in complex task settings.
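To make the retrieval-based affordance transfer idea concrete, here is a minimal sketch: embed the observed hand gesture, retrieve the most similar gesture-affordance pair from a bank built from human-object interaction videos, and map the associated contact point and approach direction into the current scene using the observed object pose. The class names, embedding dimensions, and retrieval logic below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch of retrieval-based affordance transfer.
# GAT-Grasp's real features, models, and pipeline are not reproduced here.

class AffordanceBank:
    """Stores gesture embeddings paired with affordance annotations
    (contact point + approach direction) mined from human-object
    interaction videos. All fields are hypothetical placeholders."""

    def __init__(self):
        self.gesture_embeddings = []   # list of 1-D feature vectors
        self.affordances = []          # list of dicts with grasp hints

    def add(self, gesture_embedding, affordance):
        self.gesture_embeddings.append(np.asarray(gesture_embedding, dtype=float))
        self.affordances.append(affordance)

    def retrieve(self, query_embedding, k=1):
        """Return the k affordances whose gesture embeddings are most
        similar (cosine similarity) to the query gesture."""
        query = np.asarray(query_embedding, dtype=float)
        sims = [
            float(np.dot(query, e) / (np.linalg.norm(query) * np.linalg.norm(e) + 1e-8))
            for e in self.gesture_embeddings
        ]
        top = np.argsort(sims)[::-1][:k]
        return [self.affordances[i] for i in top]


def transfer_grasp(affordance, object_pose):
    """Map a retrieved affordance (expressed in the demo object's frame)
    into the current scene using the observed 4x4 object pose."""
    contact = np.append(affordance["contact_point"], 1.0)      # homogeneous coords
    position = (object_pose @ contact)[:3]
    approach = object_pose[:3, :3] @ affordance["approach_dir"]
    return {"position": position, "approach_dir": approach}


if __name__ == "__main__":
    bank = AffordanceBank()
    # Toy entry: a "pinch" gesture associated with grasping a mug handle.
    bank.add(
        gesture_embedding=[0.9, 0.1, 0.0],
        affordance={"contact_point": np.array([0.05, 0.0, 0.10]),
                    "approach_dir": np.array([1.0, 0.0, 0.0])},
    )
    observed_gesture = [0.85, 0.15, 0.05]   # embedding of the user's current gesture
    object_pose = np.eye(4)                 # identity pose for this toy example
    best = bank.retrieve(observed_gesture, k=1)[0]
    print(transfer_grasp(best, object_pose))
```

The sketch highlights the design choice implied by the abstract: because grasping knowledge is retrieved from human demonstrations rather than drawn from object-specific priors, new objects only require a gesture embedding and an estimated pose at test time.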
Similar Papers
AffordGrasp: In-Context Affordance Reasoning for Open-Vocabulary Task-Oriented Grasping in Clutter
Robotics
Robots learn to grab objects from simple instructions.
Grasp-HGN: Grasping the Unexpected
Robotics
Robotic hands get better at grasping unfamiliar objects.
Attribute-Based Robotic Grasping with Data-Efficient Adaptation
Robotics
Teaches robots to grab new things quickly.