VLAD-Grasp: Zero-shot Grasp Detection via Vision-Language Models
By: Manav Kulshrestha, S. Talha Bukhari, Damon Conover, and more
Potential Business Impact:
Robots can grab new objects without extra training.
Robotic grasping is a fundamental capability for autonomous manipulation; however, most existing methods rely on large-scale expert annotations and necessitate retraining to handle new objects. We present VLAD-Grasp, a Vision-Language model Assisted zero-shot approach for Detecting grasps. From a single RGB-D image, our method (1) prompts a large vision-language model to generate a goal image where a straight rod "impales" the object, representing an antipodal grasp, (2) predicts depth and segmentation to lift this generated image into 3D, and (3) aligns generated and observed object point clouds via principal component analysis and correspondence-free optimization to recover an executable grasp pose. Unlike prior work, our approach is training-free and does not rely on curated grasp datasets. Despite this, VLAD-Grasp achieves performance that is competitive with or superior to that of state-of-the-art supervised models on the Cornell and Jacquard datasets. We further demonstrate zero-shot generalization to novel real-world objects on a Franka Research 3 robot, highlighting vision-language foundation models as powerful priors for robotic manipulation.
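The abstract's step (3) aligns the generated and observed object point clouds using principal component analysis plus a correspondence-free optimization. The sketch below illustrates that general idea, not the paper's implementation: the function names (pca_frame, chamfer, pca_align) and the choice of Chamfer distance as the correspondence-free score are assumptions for illustration only.

```python
# Minimal sketch (not the authors' code): coarse rigid alignment of two object
# point clouds via PCA frames, with the axis-sign ambiguity resolved by a
# correspondence-free (Chamfer-distance) criterion. Inputs are assumed to be
# numpy arrays of shape (N, 3).
import itertools
import numpy as np


def pca_frame(points):
    """Return the centroid and principal axes (as columns) of a point cloud."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Right singular vectors of the centered cloud are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centroid, vt.T


def chamfer(a, b):
    """Symmetric Chamfer distance between two (small) point clouds."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()


def pca_align(source, target):
    """Estimate a rigid transform (R, t) mapping `source` onto `target`.

    PCA fixes the principal axes only up to sign flips, so we enumerate the
    proper rotations consistent with both frames and keep the one with the
    lowest Chamfer distance (a correspondence-free score).
    """
    c_s, axes_s = pca_frame(source)
    c_t, axes_t = pca_frame(target)

    best_R, best_t, best_cost = None, None, np.inf
    for signs in itertools.product([1.0, -1.0], repeat=3):
        flipped = axes_t * np.array(signs)   # flip target axes column-wise
        R = flipped @ axes_s.T               # rotate source frame onto target frame
        if np.linalg.det(R) < 0:             # discard reflections
            continue
        t = c_t - R @ c_s
        cost = chamfer((R @ source.T).T + t, target)
        if cost < best_cost:
            best_R, best_t, best_cost = R, t, cost
    return best_R, best_t
```

In this sketch the correspondence-free objective only selects among the discrete sign flips of the PCA frame; a full pipeline in the spirit of the abstract would likely follow this coarse alignment with a continuous refinement of the pose before transferring the grasp from the generated "impaled" object to the observed one.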
Similar Papers
Free-form language-based robotic reasoning and grasping
Robotics
Robots grab objects from messy piles using words.
ZeroGrasp: Zero-Shot Shape Reconstruction Enabled Robotic Grasping
Robotics
Robots can grab things better by seeing their shapes.
Grasp-HGN: Grasping the Unexpected
Robotics
Robotic hands grab new things better.