Caching Techniques for Reducing the Communication Cost of Federated Learning in IoT Environments

Published: July 19, 2025 | arXiv ID: 2507.17772v1

By: Ahmad Alhonainy, Praveen Rao

Potential Business Impact:

Smarter sharing of model updates makes AI learn faster and cheaper.

Plain English Summary

Imagine your phone or smartwatch learning new tricks, like recognizing your voice better, without sending your personal data to a central server. This method makes that process much more efficient by deciding which updates are important enough to send, saving bandwidth and helping these smart devices work better for you. As a result, your devices can improve faster and more reliably, even with limited internet access, which matters for things like smart homes and healthcare.

Federated Learning (FL) allows multiple distributed devices to jointly train a shared model without centralizing data, but communication cost remains a major bottleneck, especially in resource-constrained environments. This paper introduces caching strategies (FIFO, LRU, and Priority-Based) to reduce unnecessary model update transmissions. By selectively forwarding significant updates, our approach lowers bandwidth usage while maintaining model accuracy. Experiments on CIFAR-10 and medical datasets show reduced communication with minimal accuracy loss. Results confirm that intelligent caching improves scalability and memory efficiency and supports reliable FL in edge IoT networks, making it practical for deployment in smart cities, healthcare, and other latency-sensitive applications.
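The abstract does not spell out the mechanics, but the core idea of selectively forwarding significant updates through a bounded cache can be sketched briefly. The Python below is a minimal illustration under stated assumptions: the class `PriorityUpdateCache`, the helper `maybe_forward`, the L2-norm significance test, and the capacity/threshold values are hypothetical stand-ins, not the authors' implementation.

```python
# Minimal sketch of selective update forwarding with a bounded,
# priority-based cache. Names, the L2-norm significance test, and
# the capacity/threshold values are illustrative assumptions, not
# the paper's exact algorithm.
import heapq
import numpy as np


class PriorityUpdateCache:
    """Keeps the most significant recent updates; evicts the least."""

    def __init__(self, capacity=8):
        self.capacity = capacity
        self._heap = []  # min-heap of (priority, seq, update)
        self._seq = 0    # tie-breaker so heapq never compares arrays

    def add(self, update, priority):
        heapq.heappush(self._heap, (priority, self._seq, update))
        self._seq += 1
        if len(self._heap) > self.capacity:
            heapq.heappop(self._heap)  # drop the lowest-priority entry

    def drain(self):
        """Return cached updates, most significant first."""
        items = sorted(self._heap, reverse=True)
        self._heap = []
        return [u for _, _, u in items]


def maybe_forward(cache, new_weights, old_weights, threshold=1e-3):
    """Cache the update only if it changed the model enough to matter."""
    delta = new_weights - old_weights
    significance = float(np.linalg.norm(delta))  # L2 norm as a proxy
    if significance >= threshold:
        cache.add(delta, priority=significance)
        return True   # worth transmitting to the server
    return False      # suppressed: saves one round of communication


# Toy usage: only one of two simulated local updates is forwarded.
cache = PriorityUpdateCache(capacity=4)
w_old = np.zeros(10)
maybe_forward(cache, w_old + 0.5, w_old)   # significant -> cached
maybe_forward(cache, w_old + 1e-6, w_old)  # negligible -> skipped
print(len(cache.drain()))                  # prints 1
```

A FIFO variant would evict the oldest cached entry regardless of priority, and an LRU variant the least recently accessed one; the significance-based forwarding test would stay the same.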

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
5 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing