Federated Distillation Assisted Vehicle Edge Caching Scheme Based on Lightweight DDPM
By: Xun Li, Qiong Wu, Pingyi Fan, and more
Potential Business Impact:
Cars learn what you want before you ask.
Vehicle edge caching is a promising technology that can significantly reduce the latency for vehicle users (VUs) to access content by pre-caching the content users are interested in at edge nodes. It is therefore crucial to accurately predict the content VUs are interested in without compromising their privacy. Traditional federated learning (FL) can protect user privacy by sharing models rather than raw data. However, FL training requires frequent model transmissions, which incur significant communication overhead. Additionally, vehicles may leave the roadside unit (RSU) coverage area before training completes, leading to training failures. To address these issues, in this letter, we propose a federated distillation-assisted vehicle edge caching scheme based on a lightweight denoising diffusion probabilistic model (LDPM). In federated distillation, participants exchange compact model outputs rather than full model parameters, reducing per-round communication and shortening training time. The simulation results demonstrate that the proposed scheme is robust to variations in vehicle speed, significantly reduces communication overhead, and improves the cache hit percentage.
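The abstract does not spell out the mechanics, but the core idea of federated distillation is that clients upload soft predictions (logits) on a shared proxy set rather than model weights. Below is a minimal PyTorch sketch of that exchange, not the paper's actual design: the sizes, the proxy features, the temperature `T`, and the simple logit-averaging aggregator at the RSU are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative setup (assumed, not from the paper): each vehicle trains a
# small local predictor and uploads only its logits on a shared proxy set.
NUM_VEHICLES = 4
NUM_CONTENTS = 10   # content items whose popularity is being predicted
PROXY_SIZE = 32     # shared unlabeled proxy samples held at the RSU
FEAT_DIM = 16

proxy_x = torch.randn(PROXY_SIZE, FEAT_DIM)  # stand-in proxy features

def make_model():
    return nn.Sequential(nn.Linear(FEAT_DIM, 32), nn.ReLU(),
                         nn.Linear(32, NUM_CONTENTS))

vehicles = [make_model() for _ in range(NUM_VEHICLES)]

for rnd in range(3):  # a few federated-distillation rounds
    # 1) Each vehicle computes logits on the proxy set; this small matrix
    #    is the only thing uploaded to the RSU (no weights are sent).
    with torch.no_grad():
        local_logits = [m(proxy_x) for m in vehicles]

    # 2) The RSU aggregates the soft predictions (simple average here).
    teacher_logits = torch.stack(local_logits).mean(dim=0)

    # 3) Each vehicle distills the aggregated knowledge back into its model
    #    by minimizing the KL divergence to the averaged soft targets.
    T = 2.0  # distillation temperature (assumed hyperparameter)
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    for m in vehicles:
        opt = torch.optim.SGD(m.parameters(), lr=0.1)
        for _ in range(5):
            opt.zero_grad()
            log_probs = F.log_softmax(m(proxy_x) / T, dim=1)
            loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
            loss.backward()
            opt.step()
    print(f"round {rnd}: distillation loss {loss.item():.4f}")
```

The per-round upload in this sketch is just a PROXY_SIZE x NUM_CONTENTS logit matrix, typically far smaller than the full parameter vector conventional FL would transmit, which is the source of the communication savings the abstract claims.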
Similar Papers
Federated Learning Assisted Edge Caching Scheme Based on Lightweight Architecture DDPM
Networking and Internet Architecture
Faster internet by guessing what you'll watch next.
Targeted Attacks and Defenses for Distributed Federated Learning in Vehicular Networks
Networking and Internet Architecture
Makes self-driving cars safer from hackers.
Decentralized Fairness Aware Multi Task Federated Learning for VR Network
Machine Learning (CS)
Makes VR work better without wires.