Focused Blind Switching Manipulation Based on Constrained and Regional Touch States of Multi-Fingered Hand Using Deep Learning
By: Satoshi Funabashi, Atsumu Hiramoto, Naoya Chiba, and others
Potential Business Impact:
Helps robots grasp objects and open caps more reliably.
To achieve a desired grasping posture (including object position and orientation), multi-finger motions need to be conducted according to the current touch state. Specifically, when subtle changes happen while correcting the object state, not only proprioception but also tactile information from the entire hand can be beneficial. However, switching motions with high-DOF multi-finger hands and abundant tactile information remains challenging. In this study, we propose a loss function with constraints on touch states and an attention mechanism that focuses on important modalities depending on the touch state. The policy model is AE-LSTM, which consists of an Autoencoder (AE) that compresses abundant tactile information and a Long Short-Term Memory (LSTM) network that switches the motion depending on the touch state. Cap-opening was chosen as the target task; it consists of the subtasks of sliding an object and opening its cap. As a result, the proposed method achieved the best success rates across a variety of objects in real-time cap-opening manipulation. Furthermore, we confirmed that the proposed model acquired the features of each subtask and attended to specific modalities.
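The abstract's pipeline (compress tactile input with an autoencoder, fuse modalities via touch-state-conditioned attention, feed the result to a recurrent policy) can be sketched roughly as below. This is a minimal illustrative toy, not the paper's implementation: all dimensions, weight matrices, and the two-state touch flag are assumptions, and the LSTM step itself is omitted.

```python
import numpy as np

# Toy sketch, assuming: 16-dim raw tactile input, 6-dim proprioception,
# a 4-dim latent space, and a 2-dim one-hot touch state (e.g. "sliding"
# vs. "cap-opening"). Random weights stand in for trained ones.
rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

tactile = rng.normal(size=16)   # tactile signals from the whole hand (toy)
proprio = rng.normal(size=6)    # joint angles (toy)

# Encoder half of an autoencoder: compress tactile input to 4 dims.
W_enc = 0.1 * rng.normal(size=(4, 16))
z_tactile = np.tanh(W_enc @ tactile)

# Project proprioception to the same dimensionality so the two
# modalities can be fused by a weighted sum.
W_pro = 0.1 * rng.normal(size=(4, 6))
z_proprio = np.tanh(W_pro @ proprio)

# Attention: one score per modality, conditioned on the touch state.
touch_state = np.array([1.0, 0.0])
W_att = 0.1 * rng.normal(size=(2, 2))
attn = softmax(W_att @ touch_state)   # weights sum to 1

# Fused feature that would be fed to the LSTM policy at each timestep;
# the recurrent motion-switching step is omitted here.
fused = attn[0] * z_tactile + attn[1] * z_proprio
print(fused.shape, float(attn.sum()))
```

The point of the attention weights is that when the touch state changes (e.g. from sliding to cap-opening), the softmax re-weights which modality dominates the fused input, which is the mechanism the abstract credits for motion switching.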
Similar Papers
Construction of a Multiple-DOF Under-actuated Gripper with Force-Sensing via Deep Learning
Robotics
Robot hands feel objects without touching them.
Contrastive Learning for Continuous Touch-Based Authentication
Cryptography and Security
Phone recognizes you by how you touch it.
Grasp Prediction based on Local Finger Motion Dynamics
Human-Computer Interaction
Predicts what you'll grab before you touch it.