Score: 1

Contrastive Language-Image Pre-Training Model based Semantic Communication Performance Optimization

Published: July 10, 2025 | arXiv ID: 2507.08873v1

By: Shaoran Yang, Dongyu Wei, Hanzhi Yu, and more

Potential Business Impact:

Lets devices exchange the meaning of data over wireless networks without having to train shared neural networks.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In this paper, a novel contrastive language-image pre-training (CLIP) model based semantic communication framework is designed. Compared to standard neural network (e.g., convolutional neural network) based semantic encoders and decoders, which require joint training over a common dataset, our CLIP model based method requires no training procedure: the transmitter can extract the meaning of the original data without neural network training, and the receiver can train a neural network for the follow-up task without communicating with the transmitter. Next, we investigate the deployment of the CLIP model based semantic framework over a noisy wireless network. Since the semantic information generated by the CLIP model is susceptible to wireless noise and the spectrum available for semantic information transmission is limited, the CLIP model architecture and the spectrum resource block (RB) allocation must be jointly optimized to maximize semantic communication performance while accounting for wireless noise and the delay and energy used for semantic communication. To achieve this goal, we use a proximal policy optimization (PPO) based reinforcement learning (RL) algorithm to learn how wireless noise affects semantic communication performance, thereby finding the optimal CLIP model and RB allocation for each user. Simulation results show that our proposed method improves the convergence rate by up to 40% and the accumulated reward by 4x compared to soft actor-critic.
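The transmitter-side idea, extracting semantics with a frozen pre-trained CLIP model rather than a jointly trained encoder, can be illustrated with a short sketch. This is not the authors' code; the model checkpoint, input file, and feature dimension are illustrative assumptions, using the Hugging Face transformers CLIP API.

```python
# Minimal sketch: a frozen, pre-trained CLIP image encoder extracts a
# semantic feature vector with no additional training. Checkpoint name
# and input image are assumptions, not taken from the paper.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()  # frozen: no gradient updates, no joint training with the receiver

image = Image.open("example.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    semantic_vector = model.get_image_features(**inputs)  # e.g. shape (1, 512)

# The transmitter sends `semantic_vector` over the noisy channel; the
# receiver trains its own task head on such vectors without ever
# exchanging gradients or datasets with the transmitter.
```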
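The joint optimization the abstract describes can likewise be sketched as a PPO problem where the agent picks a CLIP architecture and an RB per decision, and the reward trades semantic accuracy against delay and energy. All environment dynamics, constants, and names below are toy assumptions for illustration, not the paper's formulation; the sketch uses gymnasium and stable-baselines3.

```python
# Toy single-user environment: observe per-RB channel noise, choose a
# (CLIP variant, RB) pair; reward = assumed accuracy minus an assumed
# delay/energy cost that grows with model size.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

N_CLIP_VARIANTS = 3   # assumed: e.g. ViT-B/32, ViT-B/16, ViT-L/14
N_RBS = 8             # assumed number of spectrum resource blocks

class SemanticCommEnv(gym.Env):
    def __init__(self):
        self.observation_space = spaces.Box(0.0, 1.0, shape=(N_RBS,), dtype=np.float32)
        self.action_space = spaces.MultiDiscrete([N_CLIP_VARIANTS, N_RBS])

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.noise = self.np_random.random(N_RBS).astype(np.float32)  # per-RB noise
        return self.noise, {}

    def step(self, action):
        model_idx, rb = action
        accuracy = (0.6 + 0.15 * model_idx) * (1.0 - self.noise[rb])  # larger model, higher accuracy
        delay_energy_cost = 0.1 * (model_idx + 1)                     # larger model, more delay/energy
        reward = float(accuracy - delay_energy_cost)
        self.noise = self.np_random.random(N_RBS).astype(np.float32)  # channel changes each step
        return self.noise, reward, False, False, {}

agent = PPO("MlpPolicy", SemanticCommEnv(), verbose=0)
agent.learn(total_timesteps=10_000)
```

The multi-discrete action space is one plausible way to encode the joint "which CLIP model, which RB" choice per user; the paper's actual state, action, and reward definitions may differ.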

Country of Origin
🇺🇸 🇨🇳 United States, China

Page Count
6 pages

Category
Computer Science:
Machine Learning (CS)