VisuoTactile 6D Pose Estimation of an In-Hand Object using Vision and Tactile Sensor Data

Published: January 4, 2026 | arXiv ID: 2601.01675v1

By: Snehal S. Dikhale, Karankumar Patel, Daksh Dhingra, and more

BigTech Affiliations: University of Washington

Potential Business Impact:

Robot hands can determine an object's position and orientation using both touch and sight.

Business Areas:
Image Recognition, Data and Analytics, Software

Knowledge of the 6D pose of an object can benefit in-hand object manipulation. In-hand 6D object pose estimation is challenging because of heavy occlusion produced by the robot's grippers, which can adversely affect methods that rely on vision data alone. Many robots are equipped with tactile sensors at their fingertips that could be used to complement vision data. In this paper, we present a method that uses both tactile and vision data to estimate the pose of an object grasped in a robot's hand. To address challenges such as the lack of a standard representation for tactile data and the difficulty of sensor fusion, we propose using point clouds to represent object surfaces in contact with the tactile sensor, and we present a network architecture based on pixel-wise dense fusion. We also extend NVIDIA's Deep Learning Dataset Synthesizer to produce synthetic photo-realistic vision data with corresponding tactile point clouds. Results suggest that using tactile data in addition to vision data improves the 6D pose estimate, and that our network generalizes successfully from synthetic training data to real physical robots.
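The core idea of pixel-wise dense fusion is that each 3D point (from depth or a tactile contact patch) is projected into the camera image, its per-pixel visual feature is looked up, and the geometric and visual feature vectors are concatenated point by point before pose regression. The following is a minimal NumPy sketch of that fusion step only; all function names, feature dimensions, and the max-pooled global feature are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def project_points(points, K):
    """Project Nx3 camera-frame points to integer pixel coordinates
    using the 3x3 intrinsic matrix K (pinhole model)."""
    uvw = points @ K.T                       # (N, 3) homogeneous pixels
    return (uvw[:, :2] / uvw[:, 2:3]).astype(int)

def dense_fuse(points, point_feats, image_feats, K):
    """Hypothetical dense-fusion step: concatenate each point's geometric
    feature with the visual feature at its projected pixel, then append a
    max-pooled global feature so every point sees scene-level context."""
    uv = project_points(points, K)
    h, w, _ = image_feats.shape
    uv[:, 0] = np.clip(uv[:, 0], 0, w - 1)   # guard against out-of-image points
    uv[:, 1] = np.clip(uv[:, 1], 0, h - 1)
    pixel_feats = image_feats[uv[:, 1], uv[:, 0]]          # (N, C_img)
    fused = np.concatenate([point_feats, pixel_feats], 1)  # per-point fusion
    global_feat = fused.max(axis=0, keepdims=True)         # (1, C) pooled
    return np.concatenate(
        [fused, np.repeat(global_feat, len(points), axis=0)], axis=1)

# Toy usage: 4 contact points, 8-dim geometric features,
# a 480x640 visual feature map with 16 channels.
K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
pts = np.random.default_rng(0).uniform(0.2, 1.0, (4, 3))
fused = dense_fuse(pts,
                   np.random.default_rng(1).standard_normal((4, 8)),
                   np.random.default_rng(2).standard_normal((480, 640, 16)),
                   K)
print(fused.shape)  # (4, 48): 8 geometric + 16 visual + 24 global per point
```

In a real network the concatenated per-point features would feed a pose-regression head; here the sketch stops at the fused representation to show how vision and tactile point-cloud features can share one per-point format.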

Country of Origin
🇺🇸 United States

Page Count
9 pages

Category
Computer Science:
Robotics