Enabling Training-Free Semantic Communication Systems with Generative Diffusion Models
By: Shunpu Tang, Yuanyuan Jia, Qianqian Yang, and more
Potential Business Impact:
Lets phones send clear messages even with bad signals.
Semantic communication (SemCom) has recently emerged as a promising paradigm for next-generation wireless systems. Empowered by advanced artificial intelligence (AI) technologies, SemCom has achieved significant improvements in transmission quality and efficiency. However, existing SemCom systems either rely on training over large datasets and specific channel conditions or suffer from performance degradation under channel noise when operating in a training-free manner. To address these issues, we explore the use of generative diffusion models (GDMs) as training-free SemCom systems. Specifically, we design a semantic encoding and decoding method based on the inversion and sampling processes of the denoising diffusion implicit model (DDIM), which introduces a two-stage forward diffusion process split between the transmitter and the receiver to enhance robustness against channel noise. Moreover, we optimize the sampling steps to compensate for the additional noise introduced by the channel. We also conduct a brief analysis to provide insights into this design. Simulations on the Kodak dataset validate that the proposed system outperforms existing baseline SemCom systems across various metrics.
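To make the general idea concrete, below is a minimal PyTorch sketch, not the paper's actual implementation: DDIM inversion serves as the transmitter-side semantic encoder, an AWGN channel corrupts the transmitted latent, and the receiver selects a higher starting timestep whose noise level accounts for the channel noise before running DDIM sampling as the decoder. The denoiser `TinyEps`, the linear beta schedule, and the `compensate` heuristic for choosing the starting step are illustrative placeholders under assumed settings, not the authors' pretrained model, schedule, or step-selection rule.

```python
import torch
import torch.nn as nn


# Stand-in epsilon-prediction network. In a real system this would be a
# pretrained diffusion model; `TinyEps` here is purely illustrative.
class TinyEps(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x, t_frac):
        # Broadcast the normalised timestep as an extra input channel.
        t_map = t_frac.reshape(-1, 1, 1, 1).expand(-1, 1, *x.shape[2:])
        return self.net(torch.cat([x, t_map], dim=1))


T = 1000
betas = torch.linspace(1e-4, 2e-2, T)          # simple linear schedule (assumed)
abar = torch.cumprod(1.0 - betas, dim=0)       # cumulative \bar{alpha}_t


def ddim_invert(eps_model, x0, t_target, steps=20):
    """Deterministic DDIM inversion: map a clean image to a latent at noise
    level t_target (transmitter-side semantic encoding)."""
    ts = torch.linspace(0, t_target, steps + 1).long()
    x = x0
    for t_prev, t in zip(ts[:-1], ts[1:]):
        a_prev, a_t = abar[t_prev], abar[t]
        eps = eps_model(x, torch.full((x.shape[0],), float(t_prev) / T))
        x0_pred = (x - (1 - a_prev).sqrt() * eps) / a_prev.sqrt()
        x = a_t.sqrt() * x0_pred + (1 - a_t).sqrt() * eps
    return x


def ddim_sample(eps_model, x_t, t_start, steps=20):
    """Deterministic DDIM sampling from noise level t_start back to a clean
    image (receiver-side semantic decoding)."""
    ts = torch.linspace(t_start, 0, steps + 1).long()
    x = x_t
    for t, t_prev in zip(ts[:-1], ts[1:]):
        a_t, a_prev = abar[t], abar[t_prev]
        eps = eps_model(x, torch.full((x.shape[0],), float(t) / T))
        x0_pred = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        x = a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps
    return x


def awgn(x, snr_db):
    """AWGN channel at a given SNR (dB); returns output and noise power."""
    p_noise = x.pow(2).mean() / (10 ** (snr_db / 10))
    return x + p_noise.sqrt() * torch.randn_like(x), p_noise


def compensate(t_tx, p_noise):
    """Heuristic step compensation (an assumption, not the paper's rule):
    pick the receiver start step t_rx whose noise-to-signal ratio covers the
    transmit-side plus channel noise, and the scale that maps the received
    latent onto the diffusion marginal at t_rx."""
    nsr = ((1 - abar[t_tx]) + p_noise) / abar[t_tx]
    hits = ((1 - abar) / abar >= nsr).nonzero()
    t_rx = int(hits[0]) if hits.numel() > 0 else T - 1
    scale = (abar[t_rx] / abar[t_tx]).sqrt()
    return t_rx, scale


if __name__ == "__main__":
    torch.manual_seed(0)
    eps_model = TinyEps()
    img = torch.rand(1, 3, 32, 32) * 2 - 1            # toy image in [-1, 1]

    t_tx = 300                                        # stage 1: Tx-side noise level
    latent = ddim_invert(eps_model, img, t_tx)        # semantic encoding (Tx)
    received, p_noise = awgn(latent, snr_db=10)       # stage 2: channel noise
    t_rx, scale = compensate(t_tx, p_noise)           # adjust starting step
    recon = ddim_sample(eps_model, scale * received, t_rx)
    print("reconstruction:", recon.shape, "start step:", t_rx)
```

With a pretrained denoiser in place of `TinyEps`, the same flow reflects the two-stage split described in the abstract: part of the forward diffusion happens at the transmitter via inversion, the channel contributes the rest, and the receiver starts sampling from a compensated, deeper timestep rather than from the transmit-side one.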
Similar Papers
Generative AI Meets 6G and Beyond: Diffusion Models for Semantic Communications
Signal Processing
Lets phones send messages with fewer words.
Latent Diffusion Model Based Denoising Receiver for 6G Semantic Communication: From Stochastic Differential Theory to Application
Machine Learning (CS)
Makes messages clear even with bad signals.
Semantic Communication based on Generative AI: A New Approach to Image Compression and Edge Optimization
CV and Pattern Recognition
Makes phones send pictures faster and smarter.