Score: 1

On the Detectability of Active Gradient Inversion Attacks in Federated Learning

Published: November 13, 2025 | arXiv ID: 2511.10502v1

By: Vincenzo Carletti, Pasquale Foggia, Carlo Mazzocca, and more

Potential Business Impact:

Helps protect clients' private data during federated machine learning.

Business Areas:
Fraud Detection, Financial Services, Payments, Privacy and Security

One of the key advantages of Federated Learning (FL) is its ability to collaboratively train a Machine Learning (ML) model while keeping clients' data on-site. However, this can create a false sense of security. Although keeping private data local increases overall privacy, prior studies have shown that the gradients exchanged during FL training remain vulnerable to Gradient Inversion Attacks (GIAs). These attacks allow an adversary to reconstruct clients' local data, breaking the privacy promise of FL. GIAs can be launched by either a passive or an active server. In the latter case, a malicious server manipulates the global model to facilitate data reconstruction. While effective, earlier attacks in this category have been shown to be detectable by clients, limiting their real-world applicability. Recently, novel active GIAs have emerged, claiming to be far stealthier than previous approaches. This work provides the first comprehensive analysis of these claims, investigating four state-of-the-art GIAs. We propose novel lightweight client-side detection techniques, based on statistically improbable weight structures and anomalous loss and gradient dynamics. Extensive evaluation across several configurations demonstrates that our methods enable clients to effectively detect active GIAs without any modifications to the FL training protocol.
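
The paper's code is not reproduced here; as an illustration only, the sketch below (with hypothetical function names and thresholds) shows the general kind of lightweight client-side check the abstract describes: flagging statistically improbable weight structures in the received global model (e.g., many duplicated or all-zero rows in a layer) and anomalous jumps in the local loss across rounds. It is not the authors' method, merely a minimal example of the idea under these assumptions.

```python
# Illustrative sketch only (not the paper's implementation): two client-side
# signals the abstract mentions -- statistically improbable weight structures
# and anomalous loss dynamics. Thresholds and names are hypothetical.
import numpy as np

def improbable_weight_structure(weight_matrix, tol=1e-8, dup_frac=0.5):
    """Flag a layer whose rows are heavily duplicated or all-zero,
    a pattern that is statistically improbable after normal training."""
    w = np.asarray(weight_matrix, dtype=np.float64)
    quantized = np.round(w / tol).astype(np.int64)          # quantize rows for comparison
    _, counts = np.unique(quantized, axis=0, return_counts=True)
    duplicated = counts.max() / w.shape[0] >= dup_frac       # fraction of identical rows
    zeroed = np.mean(np.all(np.abs(w) < tol, axis=1)) >= dup_frac  # fraction of zero rows
    return duplicated or zeroed

def anomalous_loss_jump(loss_history, current_loss, z_thresh=4.0):
    """Flag a local loss that deviates strongly from its recent history."""
    if len(loss_history) < 5:
        return False
    mu, sigma = np.mean(loss_history), np.std(loss_history) + 1e-12
    return abs(current_loss - mu) / sigma > z_thresh

# Example usage before running a local training round:
received_layer = np.tile(np.random.randn(1, 128), (256, 1))      # suspiciously repeated rows
print(improbable_weight_structure(received_layer))                # True -> skip this round
print(anomalous_loss_jump([0.90, 0.85, 0.80, 0.78, 0.75], 5.2))   # True -> flag this round
```

Checks of this kind run entirely on the client and require no change to the FL protocol, which is consistent with the deployment constraint stated in the abstract.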

Country of Origin
🇮🇹 Italy

Page Count
18 pages

Category
Computer Science:
Cryptography and Security