exUMI: Extensible Robot Teaching System with Action-aware Task-agnostic Tactile Representation
By: Yue Xu, Litao Wei, Pengyu An, and more
Potential Business Impact:
Robots learn to feel and grip objects better.
Tactile-aware robot learning faces critical challenges in data collection and representation due to data scarcity and sparsity, and the absence of force feedback in existing systems. To address these limitations, we introduce a tactile robot learning system with both hardware and algorithm innovations. We present exUMI, an extensible data collection device that enhances the vanilla UMI with robust proprioception (via AR MoCap and a rotary encoder), modular visuo-tactile sensing, and automated calibration, achieving 100% data usability. Building on an efficiently collected dataset of over 1M tactile frames, we propose Tactile Prediction Pretraining (TPP), a representation learning framework based on action-aware temporal tactile prediction that captures contact dynamics and mitigates tactile sparsity. Real-world experiments show that TPP outperforms traditional tactile imitation learning. Our work bridges the gap between human tactile intuition and robot learning through co-designed hardware and algorithms, offering open-source resources to advance contact-rich manipulation research. Project page: https://silicx.github.io/exUMI.
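
To make the core idea of action-aware temporal tactile prediction concrete, below is a minimal PyTorch sketch of a pretraining objective in that spirit: encode the current tactile frame, condition on the executed action, and predict the tactile frame at the next timestep. The class name TPPSketch, the tactile frame size (3x32x32), the 7-D action vector, and the pixel-reconstruction loss are illustrative assumptions, not the paper's exact TPP architecture.

import torch
import torch.nn as nn

class TPPSketch(nn.Module):
    """Encode tactile frame t, fuse with action t, predict tactile frame t+1 (assumed setup)."""
    def __init__(self, feat_dim: int = 128, action_dim: int = 7):
        super().__init__()
        # Encoder: tactile image (assumed 3x32x32) -> feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # -> 32x16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # -> 64x8x8
            nn.Flatten(), nn.Linear(64 * 8 * 8, feat_dim),
        )
        # Action-conditioned dynamics head: current feature + action -> future feature.
        self.dynamics = nn.Sequential(
            nn.Linear(feat_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )
        # Decoder: predicted future feature -> next tactile frame.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # -> 32x16x16
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),              # -> 3x32x32
        )

    def forward(self, tactile_t, action_t, tactile_t1):
        z = self.encoder(tactile_t)
        z_next = self.dynamics(torch.cat([z, action_t], dim=-1))
        pred_t1 = self.decoder(z_next)
        # Pretraining loss: predict how the tactile signal evolves under the action.
        return nn.functional.mse_loss(pred_t1, tactile_t1)

# Dummy batch: (tactile_t, action_t) -> tactile_t+1; after pretraining,
# the encoder would serve as the tactile representation for downstream policy learning.
model = TPPSketch()
loss = model(torch.randn(8, 3, 32, 32), torch.randn(8, 7), torch.randn(8, 3, 32, 32))
loss.backward()

Conditioning the prediction on the action is what makes the learned features "action-aware": the encoder must retain contact-dynamics information that is useful for forecasting, not just static texture, which is the property the abstract attributes to TPP.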
Similar Papers
ActiveUMI: Robotic Manipulation with Active Perception from Robot-Free Human Demonstrations
Robotics
Teaches robots to do tasks by watching humans.
Simultaneous Tactile-Visual Perception for Learning Multimodal Robot Manipulation
Robotics
Robots see and feel to do tricky jobs.
MV-UMI: A Scalable Multi-View Interface for Cross-Embodiment Learning
Robotics
Robots learn better from more camera views.