On the Detectability of Active Gradient Inversion Attacks in Federated Learning
By: Vincenzo Carletti, Pasquale Foggia, Carlo Mazzocca, and more
Potential Business Impact:
Helps protect clients' private data during collaborative machine learning.
One of the key advantages of Federated Learning (FL) is its ability to collaboratively train a Machine Learning (ML) model while keeping clients' data on-site. However, this can create a false sense of security. Although keeping private data local improves overall privacy, prior studies have shown that the gradients exchanged during FL training remain vulnerable to Gradient Inversion Attacks (GIAs). These attacks allow an adversary to reconstruct clients' local data, breaking the privacy promise of FL. GIAs can be launched by either a passive or an active server. In the latter case, a malicious server manipulates the global model to facilitate data reconstruction. While effective, earlier attacks in this category have been shown to be detectable by clients, limiting their real-world applicability. Recently, novel active GIAs have emerged, claiming to be far stealthier than previous approaches. This work provides the first comprehensive analysis of these claims, investigating four state-of-the-art GIAs. We propose novel lightweight client-side detection techniques based on statistically improbable weight structures and anomalous loss and gradient dynamics. Extensive evaluation across several configurations demonstrates that our methods enable clients to effectively detect active GIAs without any modifications to the FL training protocol.
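To make the detection idea concrete, below is a minimal Python sketch in the spirit of the two signals the abstract names: statistically improbable weight structures in the received global model, and anomalous loss/gradient dynamics across rounds. All function names, thresholds, and heuristics here are illustrative assumptions for exposition, not the authors' actual detectors.

```python
# Illustrative client-side checks (assumptions, not the paper's method):
# 1) weight-structure check: active GIAs often plant near-duplicate or
#    extreme rows in a layer to isolate individual inputs;
# 2) dynamics check: a manipulated global model can produce losses or
#    gradient norms far outside the client's own recent history.
import numpy as np

def improbable_weight_structure(weight_matrix, dup_tol=1e-6, z_thresh=6.0):
    """Flag a layer whose rows are near-duplicates (up to sign) or whose
    row norms deviate wildly from the layer's own statistics."""
    W = np.asarray(weight_matrix, dtype=np.float64)
    # Near-duplicate rows: compare cosine similarity between all row pairs.
    normed = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    cos = normed @ normed.T
    np.fill_diagonal(cos, 0.0)
    if np.any(np.abs(cos) > 1.0 - dup_tol):
        return True
    # Row-norm outliers: a handful of rows with extreme magnitude.
    norms = np.linalg.norm(W, axis=1)
    z = (norms - norms.mean()) / (norms.std() + 1e-12)
    return bool(np.any(np.abs(z) > z_thresh))

def anomalous_dynamics(loss_history, grad_norm_history, z_thresh=4.0):
    """Flag the latest round if its loss or gradient norm is a z-score
    outlier relative to the client's own past rounds."""
    def is_outlier(history):
        if len(history) < 5:
            return False  # not enough history to judge
        past, latest = np.array(history[:-1]), history[-1]
        return abs(latest - past.mean()) / (past.std() + 1e-12) > z_thresh
    return is_outlier(loss_history) or is_outlier(grad_norm_history)
```

In this sketch, a client would run such checks on each received global model before computing and sharing gradients, and skip or report rounds that are flagged; notably, neither check requires any change to the FL training protocol itself, consistent with the claim in the abstract.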
Similar Papers
Exploring the Vulnerabilities of Federated Learning: A Deep Dive into Gradient Inversion Attacks
Cryptography and Security
Surveys how gradient inversion attacks can expose private data in federated learning.
GUIDE: Enhancing Gradient Inversion Attacks in Federated Learning with Denoising Models
Cryptography and Security
Shows how denoising models can strengthen gradient inversion attacks that reconstruct private training images.