Vision-Guided Grasp Planning for Prosthetic Hands in Unstructured Environments
By: Shifa Sulaiman, Akash Bachhar, Ming Shen, and more
Potential Business Impact:
Lets prosthetic hands grab things like real hands.
Recent advancements in prosthetic technology have increasingly focused on enhancing dexterity and autonomy through intelligent control systems. Vision-based approaches offer promising results for enabling prosthetic hands to interact more naturally with diverse objects in dynamic environments. Building on this foundation, the paper presents a vision-guided grasping algorithm for a prosthetic hand, integrating perception, planning, and control for dexterous manipulation. A camera mounted on the setup captures the scene, and a Bounding Volume Hierarchy (BVH)-based vision algorithm is employed to segment the object to be grasped and define its bounding box. Grasp contact points are then computed by generating candidate trajectories with the Rapidly-exploring Random Tree Star (RRT*) algorithm and selecting fingertip end poses based on the minimum Euclidean distance between these trajectories and the object's point cloud. Each finger's grasp pose is determined independently, enabling adaptive, object-specific configurations. A Damped Least Squares (DLS)-based inverse kinematics solver computes the corresponding joint angles, which are then transmitted to the finger actuators for execution. This modular pipeline enables per-finger grasp planning and supports real-time adaptability in unstructured environments. The proposed method is validated in simulation and through experimental integration on the Linker Hand O7 platform.
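To make the pipeline's two core numerical steps concrete, the sketch below illustrates (1) picking a fingertip end pose by minimum Euclidean distance between candidate trajectory endpoints and the object's point cloud, and (2) a standard Damped Least Squares IK update. This is a minimal illustration, not the authors' implementation: the function names, the toy two-link finger, the damping value, and the random stand-in data are all assumptions made for the example.

```python
import numpy as np

# --- Hypothetical helpers illustrating the abstract's two steps (names are assumptions) ---

def select_fingertip_pose(candidate_endpoints, object_cloud):
    """Pick the candidate fingertip end pose (e.g. from RRT*-generated trajectories)
    whose position is closest, in Euclidean distance, to the object's point cloud."""
    # Pairwise distances: (num_candidates, num_cloud_points)
    diffs = candidate_endpoints[:, None, :] - object_cloud[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    # Score each candidate by its nearest cloud point and keep the best one
    best = np.argmin(dists.min(axis=1))
    return candidate_endpoints[best]

def dls_ik(q0, target, forward_kinematics, jacobian,
           damping=0.05, max_iters=100, tol=1e-4):
    """Damped Least Squares IK: dq = J^T (J J^T + lambda^2 I)^{-1} * error."""
    q = q0.copy()
    for _ in range(max_iters):
        error = target - forward_kinematics(q)
        if np.linalg.norm(error) < tol:
            break
        J = jacobian(q)                              # position Jacobian
        JJt = J @ J.T
        dq = J.T @ np.linalg.solve(JJt + (damping ** 2) * np.eye(JJt.shape[0]), error)
        q = q + dq
    return q

# Toy planar 2-link "finger" used only to exercise the sketch (illustrative link lengths)
L1, L2 = 0.04, 0.03

def fk(q):
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jac(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

cloud = np.random.rand(200, 2) * 0.05        # stand-in object point cloud
candidates = np.random.rand(10, 2) * 0.06    # stand-in trajectory endpoints
target = select_fingertip_pose(candidates, cloud)
angles = dls_ik(np.zeros(2), target, fk, jac)
print("selected fingertip target:", target, "joint angles:", angles)
```

In the paper's pipeline this pair of steps would run once per finger, which is what makes the per-finger, object-specific grasp configurations possible; the damping term keeps the joint update stable near singular finger poses.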
Similar Papers
A Vision-Enabled Prosthetic Hand for Children with Upper Limb Disabilities
Robotics
Helps kids with missing hands grasp things better.
Grasp-HGN: Grasping the Unexpected
Robotics
Helps robotic hands grab new things better.
Using Visual Language Models to Control Bionic Hands: Assessment of Object Perception and Grasp Inference
Robotics
Helps robot hands see and grab things better.