Caching Techniques for Reducing the Communication Cost of Federated Learning in IoT Environments
By: Ahmad Alhonainy, Praveen Rao
Potential Business Impact:
Smarter sharing makes AI learn faster, cheaper.
Federated Learning (FL) allows multiple distributed devices to jointly train a shared model without centralizing data, but communication cost remains a major bottleneck, especially in resource-constrained environments. This paper introduces caching strategies (FIFO, LRU, and Priority-Based) to reduce unnecessary model update transmissions. By selectively forwarding only significant updates, our approach lowers bandwidth usage while maintaining model accuracy. Experiments on CIFAR-10 and medical datasets show reduced communication with minimal accuracy loss. The results confirm that intelligent caching improves scalability and memory efficiency and supports reliable FL in edge IoT networks, making it practical for deployment in smart cities, healthcare, and other latency-sensitive applications.
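The selective-forwarding idea lends itself to a short sketch. Below is a minimal Python illustration of a bounded client-side cache that transmits a model update only when its magnitude crosses a threshold and otherwise defers it under FIFO or LRU eviction. The class name UpdateCache, the L2-norm significance test, and the 0.05 threshold are assumptions for illustration, not the paper's exact method.

```python
# A minimal sketch of selective update forwarding with a bounded
# client-side cache. The names, the L2-norm significance test, and the
# threshold value are illustrative assumptions, not the paper's exact
# algorithm.
from collections import OrderedDict

import numpy as np


class UpdateCache:
    """Caches insignificant model updates instead of transmitting them."""

    def __init__(self, capacity=4, policy="LRU", threshold=0.05):
        self.capacity = capacity    # maximum number of cached updates
        self.policy = policy        # "FIFO" or "LRU" eviction
        self.threshold = threshold  # assumed significance cutoff
        self.cache = OrderedDict()  # round id -> deferred update vector

    def is_significant(self, update):
        # Forward only updates whose L2 norm exceeds the threshold;
        # smaller updates are held back to save bandwidth.
        return np.linalg.norm(update) > self.threshold

    def put(self, round_id, update):
        self.cache[round_id] = update
        self.cache.move_to_end(round_id)
        if len(self.cache) > self.capacity:
            # Both policies evict from the front of the ordered dict;
            # FIFO never reorders on access, LRU does (see get()).
            self.cache.popitem(last=False)

    def get(self, round_id):
        update = self.cache.get(round_id)
        if update is not None and self.policy == "LRU":
            self.cache.move_to_end(round_id)  # refresh recency on access
        return update


# Client-side loop: transmit only significant deltas, cache the rest.
cache = UpdateCache(capacity=4, policy="LRU", threshold=0.05)
rng = np.random.default_rng(0)
for rnd in range(10):
    scale = 0.01 if rnd % 2 else 0.001         # alternate large/small updates
    delta = rng.normal(scale=scale, size=100)  # stand-in for a model delta
    if cache.is_significant(delta):
        print(f"round {rnd}: send ({np.linalg.norm(delta):.3f})")
    else:
        cache.put(rnd, delta)                  # defer; send/aggregate later
print(f"cached rounds: {list(cache.cache)}")
```

With an OrderedDict, both policies evict from the front of the structure; the only difference is whether a cache hit refreshes an entry's position, which is what lets LRU keep recently useful updates around longer.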
Similar Papers
Enhancing Communication Efficiency in FL with Adaptive Gradient Quantization and Communication Frequency Optimization
Distributed, Parallel, and Cluster Computing
Makes phones train AI without sharing private info.
Communication-Efficient Zero-Order and First-Order Federated Learning Methods over Wireless Networks
Machine Learning (CS)
Makes phones learn together without sharing secrets.
Optimal Batch-Size Control for Low-Latency Federated Learning with Device Heterogeneity
Machine Learning (CS)
Makes smart devices learn faster, privately.