UNIC: Learning Unified Multimodal Extrinsic Contact Estimation
By: Zhengtong Xu, Yuki Shirai
Potential Business Impact:
Helps robots feel where a held object touches things around it.
Contact-rich manipulation requires reliable estimation of extrinsic contacts, the interactions between a grasped object and its environment that provide essential contextual information for planning, control, and policy learning. However, existing approaches often rely on restrictive assumptions, such as predefined contact types, fixed grasp configurations, or camera calibration, that hinder generalization to novel objects and deployment in unstructured environments. In this paper, we present UNIC, a unified multimodal framework for extrinsic contact estimation that operates without any prior knowledge or camera calibration. UNIC directly encodes visual observations in the camera frame and integrates them with proprioceptive and tactile modalities in a fully data-driven manner. It introduces a unified contact representation based on scene affordance maps that captures diverse contact formations, and it employs a multimodal fusion mechanism with random masking, enabling robust multimodal representation learning. Extensive experiments demonstrate that UNIC performs reliably: it achieves a 9.6 mm average Chamfer distance error on unseen contact locations, generalizes to unseen objects, remains robust under missing modalities, and adapts to dynamic camera viewpoints. These results establish extrinsic contact estimation as a practical and versatile capability for contact-rich manipulation.
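The abstract names two concrete mechanisms that lend themselves to a sketch: multimodal fusion with random masking, which is what makes the model robust when a sensor stream is missing, and the average Chamfer distance used to score predicted contacts. Below is a minimal, hypothetical PyTorch sketch of both; the class names, embedding size, masking probability, and the use of learned placeholder tokens are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch, assuming a PyTorch setup. Nothing here is the
# paper's actual architecture; it only illustrates the two ideas the
# abstract names: random modality masking during fusion, and Chamfer
# distance as a point-set error metric.
import torch
import torch.nn as nn


class MaskedMultimodalFusion(nn.Module):
    """Fuse per-modality embeddings; randomly drop modalities in training."""

    def __init__(self, dim: int = 256, p_mask: float = 0.3):
        super().__init__()
        self.p_mask = p_mask
        # One learned "missing" token per modality (vision, proprio, tactile)
        # stands in for a masked or absent input stream.
        self.missing = nn.Parameter(torch.zeros(3, dim))
        self.fuse = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )

    def forward(self, vision, proprio, tactile):
        # Stack the three modality embeddings: (batch, 3, dim).
        tokens = torch.stack([vision, proprio, tactile], dim=1)
        if self.training:
            # Independently mask each modality with probability p_mask and
            # substitute its learned placeholder, so the fused representation
            # stays usable when a modality is missing at test time.
            drop = torch.rand(tokens.shape[0], 3, 1, device=tokens.device) < self.p_mask
            tokens = torch.where(drop, self.missing.expand_as(tokens), tokens)
        return self.fuse(tokens).mean(dim=1)  # pooled fused feature


def chamfer_distance(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Symmetric average Chamfer distance between point sets (N, 3) and (M, 3)."""
    d = torch.cdist(pred, gt)  # pairwise distances, shape (N, M)
    return 0.5 * (d.min(dim=1).values.mean() + d.min(dim=0).values.mean())
```

Under this reading, the reported 9.6 mm error would correspond to chamfer_distance between predicted and ground-truth contact points averaging roughly 0.0096 m over unseen contact locations.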
Similar Papers
Simultaneous Extrinsic Contact and In-Hand Pose Estimation via Distributed Tactile Sensing
Robotics
Helps robots feel and see so they can grab things.
Learning to Act Through Contact: A Unified View of Multi-Task Robot Learning
Robotics
Robot learns many jobs with one brain.
UniTacHand: Unified Spatio-Tactile Representation for Human to Robotic Hand Skill Transfer
Robotics
Teaches robots to feel like humans.