VisuoTactile 6D Pose Estimation of an In-Hand Object using Vision and Tactile Sensor Data
By: Snehal S. Dikhale, Karankumar Patel, Daksh Dhingra and more
Potential Business Impact:
Robot hands can determine where a grasped object is by combining touch and sight.
Knowledge of the 6D pose of an object can benefit in-hand object manipulation. In-hand 6D object pose estimation is challenging because of heavy occlusion produced by the robot's grippers, which can have an adverse effect on methods that rely on vision data only. Many robots are equipped with tactile sensors at their fingertips that could be used to complement vision data. In this paper, we present a method that uses both tactile and vision data to estimate the pose of an object grasped in a robot's hand. To address challenges like lack of standard representation for tactile data and sensor fusion, we propose the use of point clouds to represent object surfaces in contact with the tactile sensor and present a network architecture based on pixel-wise dense fusion. We also extend NVIDIA's Deep Learning Dataset Synthesizer to produce synthetic photo-realistic vision data and corresponding tactile point clouds. Results suggest that using tactile data in addition to vision data improves the 6D pose estimate, and our network generalizes successfully from synthetic training to real physical robots.
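The core idea of pixel-wise dense fusion is to pair, for every point, a visual feature (sampled from the image at that point's pixel) with a geometric feature (from the point cloud), and to augment each pair with a global descriptor pooled over all points. The sketch below is a hypothetical NumPy illustration of that fusion step under assumed feature shapes, not the paper's network (which learns these features and regresses the 6D pose from them):

```python
import numpy as np

def dense_fuse(visual_feats, geom_feats):
    """Toy sketch of dense-fusion feature pairing (assumed shapes).

    visual_feats: (N, Dv) image features sampled at each point's pixel.
    geom_feats:   (N, Dg) features from the (tactile) point cloud.
    Returns (N, 2*(Dv + Dg)) per-point fused features: each row is the
    point's own visual+geometric pair concatenated with a tiled global
    descriptor shared by all points.
    """
    # Per-point pairing of visual and geometric features.
    per_point = np.concatenate([visual_feats, geom_feats], axis=1)
    # Symmetric (max) pooling gives an order-invariant global descriptor.
    global_feat = per_point.max(axis=0, keepdims=True)
    tiled = np.repeat(global_feat, per_point.shape[0], axis=0)
    return np.concatenate([per_point, tiled], axis=1)

rng = np.random.default_rng(0)
fused = dense_fuse(rng.standard_normal((128, 32)),   # visual features
                   rng.standard_normal((128, 16)))   # tactile/geometric features
print(fused.shape)  # (128, 96)
```

In the actual architecture a pose regressor then predicts a 6D pose candidate from each fused per-point feature; the per-point structure is what lets the network stay robust when the grippers occlude most of the object in the image.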
Similar Papers
In-Hand Object Pose Estimation via Visual-Tactile Fusion
Robotics
Robots can grab and move things better.
Visuo-Tactile Object Pose Estimation for a Multi-Finger Robot Hand with Low-Resolution In-Hand Tactile Sensing
Robotics
Robots feel objects to know where they are.
Object Pose Estimation through Dexterous Touch
Robotics
Robot hand learns object shape and position by touch.