Fast, Private, and Protected: Safeguarding Data Privacy and Defending Against Model Poisoning Attacks in Federated Learning
By: Nicolas Riccieri Gardin Assumpcao, Leandro Villas
Potential Business Impact:
Protects shared computer learning from attackers.
Federated Learning (FL) is a distributed training paradigm wherein participants collaborate to build a global model while ensuring the privacy of the involved data, which remains stored on participant devices. However, proposals aiming to ensure such privacy also make it challenging to protect against potential attackers seeking to compromise the training outcome. In this context, we present Fast, Private, and Protected (FPP), a novel approach that aims to safeguard federated training while enabling secure aggregation to preserve data privacy. This is accomplished by evaluating rounds using participants' assessments and enabling training recovery after an attack. FPP also employs a reputation-based mechanism to mitigate the participation of attackers. We created a Dockerized environment to validate FPP's performance against other approaches from the literature (FedAvg, Power-of-Choice, and aggregation via Trimmed Mean and Median). Our experiments demonstrate that FPP achieves a rapid convergence rate and can converge even in the presence of malicious participants performing model poisoning attacks.
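The abstract names FedAvg and two robust baselines, aggregation via coordinate-wise Trimmed Mean and Median. As a rough illustration of why those baselines resist model poisoning while plain averaging does not (the abstract does not detail FPP's own round-assessment and reputation mechanism, so that is not sketched here), a minimal NumPy sketch of the three aggregators:

```python
import numpy as np

def fedavg(updates):
    """Plain FedAvg-style aggregation: unweighted coordinate-wise mean
    of the client updates (one row per client)."""
    return np.mean(updates, axis=0)

def coordinate_median(updates):
    """Coordinate-wise median: robust to a minority of poisoned updates."""
    return np.median(updates, axis=0)

def trimmed_mean(updates, trim_ratio=0.2):
    """Coordinate-wise trimmed mean: in each coordinate, drop the k
    largest and k smallest client values, then average the rest."""
    updates = np.sort(updates, axis=0)        # sort each coordinate independently
    k = int(len(updates) * trim_ratio)        # clients trimmed per tail
    return np.mean(updates[k:len(updates) - k], axis=0)

# Toy round: 9 honest clients near the true update, 1 attacker sending
# a large poisoned update (values here are illustrative assumptions).
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(9, 4))
poisoned = np.full((1, 4), -50.0)
updates = np.vstack([honest, poisoned])

print(fedavg(updates))             # dragged far from 1.0 by the attacker
print(coordinate_median(updates))  # stays near 1.0
print(trimmed_mean(updates))       # stays near 1.0
```

Running this shows the plain mean pulled toward the attacker's values while the median and trimmed mean remain near the honest consensus, which is the failure mode the robust baselines (and, by different means, FPP) are designed to avoid.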
Similar Papers
On the Security and Privacy of Federated Learning: A Survey with Attacks, Defenses, Frameworks, Applications, and Future Directions
Cryptography and Security
Keeps shared computer learning private and safe.
PPFPL: Cross-silo Privacy-preserving Federated Prototype Learning Against Data Poisoning Attacks
Cryptography and Security
Protects private data while training smart computer programs.
Experiences Building Enterprise-Level Privacy-Preserving Federated Learning to Power AI for Science
Distributed, Parallel, and Cluster Computing
Lets AI learn from private data safely.