Knowledge-Base based Semantic Image Transmission Using CLIP
By: Chongyang Li, Yanmei He, Tianqian Zhang, and more
Potential Business Impact:
Sends pictures by describing their meaning, not pixels.
This paper proposes a novel knowledge-base (KB) assisted semantic communication framework for image transmission. At the receiver, a Facebook AI Similarity Search (FAISS) vector database is constructed by extracting semantic embeddings from images with the Contrastive Language-Image Pre-Training (CLIP) model. During transmission, the transmitter first extracts a 512-dimensional semantic feature using the CLIP model, then compresses it with a lightweight neural network before sending it. On receiving the signal, the receiver reconstructs the feature back to 512 dimensions and performs similarity matching against the KB to retrieve the most semantically similar image. Semantic transmission success is judged by category consistency between the transmitted and retrieved images, rather than by traditional pixel-level metrics such as Peak Signal-to-Noise Ratio (PSNR). The proposed system prioritizes semantic accuracy, offering a new evaluation paradigm for semantic-aware communication systems. Experimental validation on CIFAR100 demonstrates the effectiveness of the framework in achieving semantic image transmission.
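The extract–compress–reconstruct–retrieve pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: random unit vectors stand in for CLIP embeddings, a random linear projection and its pseudo-inverse stand in for the lightweight compression/reconstruction network, and a brute-force cosine-similarity search stands in for the FAISS index. All names (`transmit`, `receive`, the 64-dimensional compressed size) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, COMPRESSED, KB_SIZE = 512, 64, 100

# Stand-in KB: in the paper these would be CLIP image embeddings
# indexed with FAISS; here, random unit vectors for illustration.
kb = rng.standard_normal((KB_SIZE, DIM))
kb /= np.linalg.norm(kb, axis=1, keepdims=True)

# Stand-in "lightweight compression network": a random linear map
# down to 64 dims, with its pseudo-inverse used to reconstruct 512 dims.
W = rng.standard_normal((DIM, COMPRESSED)) / np.sqrt(DIM)
W_pinv = np.linalg.pinv(W)

def transmit(feature: np.ndarray) -> np.ndarray:
    """Compress the 512-d semantic feature before transmission."""
    return feature @ W

def receive(compressed: np.ndarray) -> int:
    """Reconstruct to 512 dims, then retrieve the nearest KB entry."""
    recon = compressed @ W_pinv
    recon /= np.linalg.norm(recon)
    sims = kb @ recon  # cosine similarity (all KB rows are unit-norm)
    return int(np.argmax(sims))

# Transmit the feature of KB image 7; retrieval recovers its index,
# i.e. a semantically consistent match in the KB.
idx = receive(transmit(kb[7]))
```

Even though the 512→64 projection discards most coordinates, retrieval still succeeds because random projections approximately preserve relative similarities, which is the property the paper's learned compressor exploits far more efficiently.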
Similar Papers
Contrastive Language-Image Pre-Training Model based Semantic Communication Performance Optimization
Machine Learning (CS)
Lets computers share ideas without needing to train them.
InfoCLIP: Bridging Vision-Language Pretraining and Open-Vocabulary Semantic Segmentation via Information-Theoretic Alignment Transfer
CV and Pattern Recognition
Lets computers label picture parts with any words.
Compression Beyond Pixels: Semantic Compression with Multimodal Foundation Models
CV and Pattern Recognition
Makes pictures smaller, keeping their meaning.