Multi-Modal Self-Supervised Semantic Communication
By: Hang Zhao, Hongru Li, Dongfang Xu, and more
Potential Business Impact:
Teaches computers to share information more efficiently.
Semantic communication is emerging as a promising paradigm that focuses on the extraction and transmission of semantic meanings using deep learning techniques. While current research primarily addresses the reduction of semantic communication overhead, it often overlooks the training phase, which can incur significant communication costs in dynamic wireless environments. To address this challenge, we propose a multi-modal semantic communication system that leverages multi-modal self-supervised learning to enhance task-agnostic feature extraction. The proposed approach employs self-supervised learning during the pre-training phase to extract task-agnostic semantic features, followed by supervised fine-tuning for downstream tasks. This dual-phase strategy effectively captures both modality-invariant and modality-specific features while minimizing training-related communication overhead. Experimental results on the NYU Depth V2 dataset demonstrate that the proposed method significantly reduces training-related communication overhead while maintaining or exceeding the performance of existing supervised learning approaches. The findings underscore the advantages of multi-modal self-supervised learning in semantic communication, paving the way for more efficient and scalable edge inference systems.
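To make the dual-phase strategy concrete, here is a minimal PyTorch sketch of the idea: a self-supervised pre-training phase that aligns two modality encoders with a cross-modal contrastive objective (no labels needed), followed by supervised fine-tuning of a small task head on top of the frozen encoders. The encoder architecture, the InfoNCE-style loss, the classification task, and all names (ModalityEncoder, info_nce, task_head) are illustrative assumptions for exposition, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Small per-modality encoder (hypothetical architecture, one per modality, e.g. RGB and depth)."""
    def __init__(self, in_channels, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

def info_nce(z_a, z_b, temperature=0.1):
    """Cross-modal contrastive loss: matching RGB/depth pairs in a batch are positives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)

# --- Phase 1: self-supervised pre-training (label-free, task-agnostic features) ---
rgb_enc, depth_enc = ModalityEncoder(3), ModalityEncoder(1)
params = list(rgb_enc.parameters()) + list(depth_enc.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

rgb = torch.randn(8, 3, 64, 64)    # stand-in for NYU Depth V2 RGB crops
depth = torch.randn(8, 1, 64, 64)  # stand-in for aligned depth maps

for step in range(10):
    loss = info_nce(rgb_enc(rgb), depth_enc(depth))  # align modalities -> modality-invariant features
    opt.zero_grad(); loss.backward(); opt.step()

# --- Phase 2: supervised fine-tuning of a lightweight task head for a downstream task ---
task_head = nn.Linear(2 * 128, 13)  # e.g., 13-way scene classification (hypothetical task)
ft_opt = torch.optim.Adam(task_head.parameters(), lr=1e-3)
labels = torch.randint(0, 13, (8,))

for step in range(10):
    with torch.no_grad():  # encoders frozen: only the small head is updated during fine-tuning
        feats = torch.cat([rgb_enc(rgb), depth_enc(depth)], dim=1)
    loss = F.cross_entropy(task_head(feats), labels)
    ft_opt.zero_grad(); loss.backward(); ft_opt.step()
```

In this reading, the communication savings come from the fact that the expensive, label-free pre-training produces reusable task-agnostic features, so only the lightweight task head needs supervised updates when the downstream task or wireless environment changes.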
Similar Papers
Exploring Textual Semantics Diversity for Image Transmission in Semantic Communication Systems using Visual Language Model
CV and Pattern Recognition
Sends pictures better by describing them with words.
Task-Oriented Semantic Communication in Large Multimodal Models-based Vehicle Networks
Artificial Intelligence
Helps car AI understand traffic better with less data.
Semantic Communications via Features Identification
Information Theory
Lets phones understand messages without sending all words.