Boosting Gradient Leakage Attacks: Data Reconstruction in Realistic FL Settings
By: Mingyuan Fan, Fuyi Wang, Cen Chen, and more
Potential Business Impact:
Shows how private data can be stolen from shared machine learning.
Federated learning (FL) enables collaborative model training among multiple clients without exposing raw data. Its ability to safeguard privacy, the heart of FL, has recently become a hot-button debate topic. To elaborate, several studies have introduced a class of attacks known as gradient leakage attacks (GLAs), which exploit the gradients shared during training to reconstruct clients' raw data. Some literature, however, contends that there is no substantial privacy risk in practical FL environments, arguing that GLAs are effective only under overly relaxed conditions, such as small batch sizes and knowledge of clients' data distributions. This paper bridges this critical gap by empirically demonstrating that clients' data can still be effectively reconstructed, even in realistic FL environments. Upon revisiting GLAs, we find that their performance failures stem from their inability to effectively solve the gradient matching problem. To alleviate the performance bottlenecks identified above, we develop FedLeak, which introduces two novel techniques: partial gradient matching and gradient regularization. Moreover, to evaluate the performance of FedLeak in real-world FL environments, we formulate a practical evaluation protocol grounded in a thorough review of extensive FL literature and industry practices. Under this protocol, FedLeak still achieves high-fidelity data reconstruction, underscoring a significant vulnerability in FL systems and the urgent need for more effective defense methods.
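For readers unfamiliar with the gradient matching problem the abstract refers to, the sketch below shows the vanilla optimization that GLAs perform (a DLG-style baseline, not FedLeak's partial matching or gradient regularization, which the paper itself introduces): the attacker optimizes dummy data and labels so that the gradient they induce matches the gradient a client shared. The model, input shapes, and step counts here are illustrative assumptions, not the paper's setup.

```python
# Minimal gradient-matching sketch (DLG-style baseline, not FedLeak itself).
# Assumes the attacker observes a client's gradient for one batch and knows
# the model architecture; the toy model and data below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
criterion = nn.CrossEntropyLoss()

# --- Victim side: the gradient the server/attacker would observe. ---
x_true = torch.rand(1, 3, 32, 32)            # private image (toy data)
y_true = torch.tensor([3])                   # private label
loss = criterion(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

# --- Attacker side: optimize dummy data so its gradient matches. ---
x_dummy = torch.rand(1, 3, 32, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)   # soft label logits
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

for step in range(100):
    def closure():
        optimizer.zero_grad()
        # Cross-entropy with soft labels so the label is also optimizable.
        dummy_loss = -torch.sum(
            F.softmax(y_dummy, dim=-1) * F.log_softmax(model(x_dummy), dim=-1)
        )
        dummy_grads = torch.autograd.grad(
            dummy_loss, model.parameters(), create_graph=True
        )
        # Gradient matching objective: L2 distance between gradient tensors.
        grad_diff = sum(
            ((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads)
        )
        grad_diff.backward()
        return grad_diff

    optimizer.step(closure)

print("reconstruction error:", (x_dummy - x_true).abs().mean().item())
```

Per the abstract, this full-gradient L2 objective is exactly where existing GLAs break down at realistic batch sizes; FedLeak's partial gradient matching and gradient regularization modify this objective, though the paper's exact formulation is not reproduced here.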
Similar Papers
Byzantine Outside, Curious Inside: Reconstructing Data Through Malicious Updates
Machine Learning (CS)
Makes private data easier to steal.
Differential Privacy: Gradient Leakage Attacks in Federated Learning Environments
Machine Learning (CS)
Protects private data when computers learn together.
On the Detectability of Active Gradient Inversion Attacks in Federated Learning
Cryptography and Security
Protects private data during computer learning.