Leveraging CVAE for Joint Configuration Estimation of Multifingered Grippers from Point Cloud Data
By: Julien Merand, Boris Meden, Mathieu Grossard
Potential Business Impact:
Lets robot hands know how to move.
This paper presents an efficient approach for determining the joint configuration of a multifingered gripper solely from the point cloud of its poly-articulated chain, whether that cloud comes from visual sensors, simulation, or even generative neural networks. Well-known inverse kinematics (IK) techniques can provide mathematically exact solutions (when they exist) for joint configuration determination based solely on the fingertip pose, but they often require post-hoc decision-making that accounts for the positions of all intermediate phalanges in the gripper's fingers, or they rely on numerical algorithms to approximate solutions for more complex kinematics. In contrast, our method leverages machine learning to overcome these challenges implicitly: a Conditional Variational Auto-Encoder (CVAE) takes point cloud data of key structural elements as input and reconstructs the corresponding joint configuration. We validate our approach on the MultiDex grasping dataset using the Allegro Hand, operating within 0.05 milliseconds and achieving accuracy comparable to state-of-the-art methods. This highlights the effectiveness of our pipeline for joint configuration estimation within the broader context of AI-driven grasp planning.
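To make the described pipeline concrete, here is a minimal NumPy sketch of a CVAE that conditions on a global point-cloud feature and reconstructs a joint-angle vector. All layer sizes, the PointNet-style max-pool condition encoder, and the use of untrained random weights are assumptions for illustration; the paper's actual architecture and training details may differ (only the 16-joint output, matching the Allegro Hand, is taken from known hardware).

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # Random weights stand in for learned parameters (illustrative only).
    return rng.normal(0, 0.1, (in_dim, out_dim)), np.zeros(out_dim)

def relu(x):
    return np.maximum(x, 0.0)

# Assumed sizes; the Allegro Hand has 16 actuated joints.
N_POINTS, LATENT, N_JOINTS, COND, HID = 512, 8, 16, 64, 128

# Condition encoder: per-point MLP + max-pool (PointNet-style assumption).
Wp, bp = linear(3, COND)

def encode_cloud(points):            # points: (N_POINTS, 3)
    feats = relu(points @ Wp + bp)   # per-point features
    return feats.max(axis=0)         # permutation-invariant global feature

# CVAE encoder q(z | joints, cloud) and decoder p(joints | z, cloud).
We1, be1 = linear(N_JOINTS + COND, HID)
Wmu, bmu = linear(HID, LATENT)
Wlv, blv = linear(HID, LATENT)
Wd1, bd1 = linear(LATENT + COND, HID)
Wd2, bd2 = linear(HID, N_JOINTS)

def encode(q, c):
    h = relu(np.concatenate([q, c]) @ We1 + be1)
    return h @ Wmu + bmu, h @ Wlv + blv      # mean, log-variance

def decode(z, c):
    h = relu(np.concatenate([z, c]) @ Wd1 + bd1)
    return h @ Wd2 + bd2                     # reconstructed joint angles

# Training-time forward pass: reconstruct joints conditioned on their cloud.
cloud = rng.normal(size=(N_POINTS, 3))
q_true = rng.uniform(-1, 1, N_JOINTS)
c = encode_cloud(cloud)
mu, logvar = encode(q_true, c)
z = mu + np.exp(0.5 * logvar) * rng.normal(size=LATENT)  # reparameterization trick
q_rec = decode(z, c)

# At inference, z is drawn from the prior: q_est = decode(rng.normal(size=LATENT), c)
```

The max-pool over per-point features gives order invariance over the cloud, and conditioning both encoder and decoder on that feature is what makes this a *conditional* VAE rather than a plain one.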
Similar Papers
Cross-Embodiment Dexterous Hand Articulation Generation via Morphology-Aware Learning
Robotics
Robots learn to grab things with different hands.
Vision-Guided Grasp Planning for Prosthetic Hands in Unstructured Environments
Robotics
Lets prosthetic hands grab things like real hands.
Towards a Multi-Embodied Grasping Agent
Robotics
Robots learn to grab anything with any hand.