On Model Protection in Federated Learning against Eavesdropping Attacks
By: Dipankar Maity, Kushal Chakrabarti
Potential Business Impact:
Keeps a shared machine learning model from being spied on while it is trained.
In this study, we investigate the protection offered by federated learning algorithms against eavesdropping adversaries. In our model, the adversary is capable of intercepting model updates transmitted from clients to the server, enabling it to create its own estimate of the model. Unlike previous research, which predominantly focuses on safeguarding client data, our work shifts attention to protecting the client model itself. Through a theoretical analysis, we examine how various factors, such as the probability of client selection, the structure of local objective functions, global aggregation at the server, and the eavesdropper's capabilities, affect the overall level of protection. We further validate our findings through numerical experiments, assessing the protection by evaluating the model accuracy achieved by the adversary. Finally, we compare our results with methods based on differential privacy, underscoring their limitations in this specific context.
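To make the threat model concrete, the sketch below simulates the setting described in the abstract on a toy least-squares problem: sampled clients send local updates to a server, and an eavesdropper that overhears a fraction of those transmissions builds its own model estimate. This is not the paper's algorithm; the objective, the selection probability, the interception probability, and the eavesdropper's averaging rule are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' code): federated averaging with
# probabilistic client selection, plus an eavesdropper who intercepts each
# client-to-server transmission with some probability and mimics the aggregation.
import numpy as np

rng = np.random.default_rng(0)

n_clients, dim, rounds = 10, 5, 200
p_select = 0.5       # probability a client participates in a round (assumed)
p_intercept = 0.7    # probability the eavesdropper captures a transmitted update (assumed)
lr = 0.1

# Each client holds a private least-squares objective f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = [rng.normal(size=(20, dim)) for _ in range(n_clients)]
x_true = rng.normal(size=dim)
b = [Ai @ x_true + 0.01 * rng.normal(size=20) for Ai in A]

x_server = np.zeros(dim)   # model held by the legitimate server
x_eve = np.zeros(dim)      # eavesdropper's estimate of the model

for t in range(rounds):
    updates, intercepted = [], []
    for i in range(n_clients):
        if rng.random() > p_select:
            continue
        # One local gradient step from the current global model.
        grad = A[i].T @ (A[i] @ x_server - b[i]) / len(b[i])
        local_model = x_server - lr * grad
        updates.append(local_model)
        # The eavesdropper overhears this transmission with probability p_intercept.
        if rng.random() < p_intercept:
            intercepted.append(local_model)

    if updates:
        x_server = np.mean(updates, axis=0)    # server-side aggregation
    if intercepted:
        x_eve = np.mean(intercepted, axis=0)   # eavesdropper mimics the aggregation

# The gap between the eavesdropper's estimate and the global model is one proxy
# for the level of protection the protocol provides.
print("||x_eve - x_server|| =", np.linalg.norm(x_eve - x_server))
```

Under these assumptions, lowering the selection or interception probabilities widens the gap between the eavesdropper's estimate and the server's model, which is the kind of dependence the paper's theoretical analysis characterizes.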
Similar Papers
An Empirical Analysis of Secure Federated Learning for Autonomous Vehicle Applications
Distributed, Parallel, and Cluster Computing
Protects self-driving cars from hackers.
Towards Privacy-Preserving Data-Driven Education: The Potential of Federated Learning
Machine Learning (CS)
Keeps student data private while still learning.
Differential Privacy in Federated Learning: Mitigating Inference Attacks with Randomized Response
Cryptography and Security
Keeps your private data safe while training AI.