Federated Learning Assisted Edge Caching Scheme Based on Lightweight Architecture DDPM
By: Xun Li, Qiong Wu, Pingyi Fan, and more
Potential Business Impact:
Faster internet by guessing what you'll watch next.
Edge caching is an emerging technology that equips edge nodes with caching units, allowing users to fetch contents of interest that have been pre-cached at the edge. The key to pre-caching is maximizing the cache hit percentage for cached content without compromising users' privacy. In this letter, we propose a federated learning (FL) assisted edge caching scheme based on a lightweight-architecture denoising diffusion probabilistic model (LDPM). Our simulation results verify that the proposed scheme achieves a higher cache hit percentage than existing FL-based methods and baseline methods.
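To make the abstract's central metric concrete, here is a minimal sketch of how a cache hit percentage is computed for a pre-caching policy. The content IDs, request traces, and the top-k popularity heuristic are all invented for illustration; the paper's actual scheme instead predicts content popularity with a federated, lightweight DDPM, which is not reproduced here.

```python
# Hypothetical sketch: cache hit percentage for an edge pre-caching policy.
# The top-k frequency heuristic below stands in for the paper's learned
# popularity model and is NOT the proposed method.
from collections import Counter


def cache_hit_percentage(requests, cached):
    """Fraction of requests served from the edge cache, as a percentage."""
    hits = sum(1 for r in requests if r in cached)
    return 100.0 * hits / len(requests)


def precache_top_k(history, k):
    """Pre-cache the k most frequently requested content IDs in the history."""
    return {cid for cid, _ in Counter(history).most_common(k)}


# Toy trace: past requests drive pre-caching; future requests are served.
history = [1, 1, 2, 3, 1, 2, 4, 5, 2, 1]
future = [1, 2, 6, 1, 3, 2, 7, 1]
cache = precache_top_k(history, k=3)        # {1, 2, 3}
print(cache_hit_percentage(future, cache))  # → 75.0
```

A better popularity predictor raises this percentage for the same cache size, which is exactly the axis along which the letter compares its LDPM-based scheme against FL-based and baseline methods.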
Similar Papers
Federated Distillation Assisted Vehicle Edge Caching Scheme Based on Lightweight DDPM
Machine Learning (CS)
Cars learn what you want before you ask.
Lightweight Federated Learning in Mobile Edge Computing with Statistical and Device Heterogeneity Awareness
Systems and Control
Makes phones learn together without sharing private data.
Federated Learning for Diffusion Models
Machine Learning (CS)
Makes AI learn better from scattered, different data.