Grasp-HGN: Grasping the Unexpected
By: Mehrshad Zandigohar, Mallesham Dasari, Gunar Schirner
Potential Business Impact:
Robotic prosthetic hands get better at grasping objects they have never seen before.
For transradial amputees, robotic prosthetic hands promise to restore the capability to perform activities of daily living. To advance next-generation prosthetic hand control design, it is crucial to address current shortcomings in robustness to out-of-lab artifacts and generalizability to new environments. Because existing datasets contain a fixed number of objects to interact with, while the real world presents a virtually infinite variety, current grasp models perform poorly on unseen objects, negatively affecting users' independence and quality of life. To address this: (i) we define semantic projection, the ability of a model to generalize to unseen object types, and show that conventional models like YOLO, despite 80% training accuracy, drop to 15% on unseen objects. (ii) we propose Grasp-LLaVA, a Grasp Vision Language Model enabling human-like reasoning to infer a suitable grasp type from the object's physical characteristics, achieving 50.2% accuracy on unseen object types compared to 36.7% for a state-of-the-art (SOTA) grasp estimation model. Lastly, to bridge the performance-latency gap, we propose the Hybrid Grasp Network (HGN), an edge-cloud deployment infrastructure that enables fast grasp estimation on the edge with accurate cloud inference as a fail-safe, effectively expanding the latency vs. accuracy Pareto frontier. HGN with confidence calibration (DC) enables dynamic switching between edge and cloud models, improving semantic projection accuracy by 5.6% (to 42.3%) with a 3.5x speedup on unseen object types. Over a real-world sample mix, it reaches 86% average accuracy (a 12.2% gain over edge-only) and 2.2x faster inference than Grasp-LLaVA alone.
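The abstract describes HGN's core mechanism as confidence-calibrated routing between a fast edge model and the slower but more accurate Grasp-LLaVA in the cloud. Below is a minimal Python sketch of that routing idea; the `edge_model`/`cloud_model` interfaces, the `predict` signature, and the 0.7 threshold are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch of HGN-style confidence routing: serve most frames from the fast edge
# classifier, and escalate low-confidence frames (likely unseen objects) to the
# cloud model as a fail-safe. Interfaces and threshold are hypothetical.

from dataclasses import dataclass


@dataclass
class GraspEstimate:
    grasp_type: str    # e.g. "power", "pinch", "tripod"
    confidence: float  # calibrated probability in [0, 1]
    source: str        # "edge" or "cloud"


def estimate_grasp(image, edge_model, cloud_model,
                   threshold: float = 0.7) -> GraspEstimate:
    """Route one frame: edge first, cloud fallback below the confidence threshold."""
    # Assumed API: predict(image) -> (grasp_type label, calibrated confidence)
    grasp_type, confidence = edge_model.predict(image)
    if confidence >= threshold:
        return GraspEstimate(grasp_type, confidence, source="edge")

    # Low confidence: pay the extra latency for the more accurate cloud model.
    grasp_type, confidence = cloud_model.predict(image)
    return GraspEstimate(grasp_type, confidence, source="cloud")
```

The threshold trades latency against accuracy: raising it sends more frames to the cloud (closer to Grasp-LLaVA-only accuracy), while lowering it keeps inference on the edge for speed.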
Similar Papers
Vision-Guided Grasp Planning for Prosthetic Hands in Unstructured Environments
Robotics
Lets prosthetic hands grab things like real hands.
VLAD-Grasp: Zero-shot Grasp Detection via Vision-Language Models
Robotics
Robots can grab new things without learning.
Bring Your Own Grasp Generator: Leveraging Robot Grasp Generation for Prosthetic Grasping
Robotics
Lets prosthetic hands grab things faster and easier.