Score: 1

Collaborative Representation Learning for Alignment of Tactile, Language, and Vision Modalities

Published: November 14, 2025 | arXiv ID: 2511.11512v1

By: Yiyun Zhou, Mingjing Xu, Jingwei Shi, and more

Potential Business Impact:

Robots learn to feel and understand objects better.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Tactile sensing offers rich and complementary information to vision and language, enabling robots to perceive fine-grained object properties. However, existing tactile sensors lack standardization, leading to redundant features that hinder cross-sensor generalization. Moreover, existing methods fail to fully integrate the intermediate communication among tactile, language, and vision modalities. To address this, we propose TLV-CoRe, a CLIP-based Tactile-Language-Vision Collaborative Representation learning method. TLV-CoRe introduces a Sensor-Aware Modulator to unify tactile features across different sensors and employs tactile-irrelevant decoupled learning to disentangle irrelevant tactile features. Additionally, a Unified Bridging Adapter is introduced to enhance tri-modal interaction within the shared representation space. To fairly evaluate the effectiveness of tactile models, we further propose the RSS evaluation framework, focusing on Robustness, Synergy, and Stability across different methods. Experimental results demonstrate that TLV-CoRe significantly improves sensor-agnostic representation learning and cross-modal alignment, offering a new direction for multimodal tactile representation.
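
The abstract gives no implementation details, so the following is only a rough illustrative sketch of how the pieces it names (a sensor-aware modulator for tactile features, a bridging adapter across modalities, and CLIP-style contrastive alignment) could fit together. All class names, dimensions, the FiLM-style modulation, and the summed pairwise InfoNCE objective are assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SensorAwareModulator(nn.Module):
    """Hypothetical sketch: per-sensor scale/shift (FiLM-style) that maps
    tactile features from different sensors into one shared space."""
    def __init__(self, dim: int, num_sensors: int):
        super().__init__()
        self.scale = nn.Embedding(num_sensors, dim)
        self.shift = nn.Embedding(num_sensors, dim)

    def forward(self, tactile_feat: torch.Tensor, sensor_id: torch.Tensor) -> torch.Tensor:
        # tactile_feat: (B, dim), sensor_id: (B,) integer sensor indices
        return tactile_feat * self.scale(sensor_id) + self.shift(sensor_id)

class BridgingAdapter(nn.Module):
    """Hypothetical bottleneck adapter: a residual update of one modality's
    embedding conditioned on context from the other two modalities."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        return x + self.net(x + context)

def trimodal_contrastive_loss(tac, img, txt, temperature: float = 0.07) -> torch.Tensor:
    """Sum of pairwise symmetric InfoNCE losses over L2-normalised embeddings,
    a common CLIP-style choice; the paper's exact objective may differ."""
    tac, img, txt = (F.normalize(z, dim=-1) for z in (tac, img, txt))

    def nce(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        logits = a @ b.t() / temperature
        labels = torch.arange(a.size(0), device=a.device)
        return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

    return nce(tac, img) + nce(tac, txt) + nce(img, txt)
```

In this sketch the modulator would be applied to the tactile encoder output before the adapters, and the tri-modal loss would align the three resulting embeddings in the shared CLIP space; how TLV-CoRe actually wires these stages together is described only in the full paper.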

Country of Origin
🇬🇧 🇨🇳 United Kingdom, China

Page Count
14 pages

Category
Computer Science:
Robotics