Leveraging Membership Inference Attacks for Privacy Measurement in Federated Learning for Remote Sensing Images
By: Anh-Kiet Duong, Petra Gomez-Krämer, Hoàng-Ân Lê, and more
Potential Business Impact:
Measures whether private satellite images used to train AI can be exposed.
Federated Learning (FL) enables collaborative model training while keeping training data localized, helping to preserve privacy in domains such as remote sensing. However, recent studies show that FL models may still leak sensitive information through their outputs, motivating the need for rigorous privacy evaluation. In this paper, we leverage membership inference attacks (MIA) as a quantitative privacy measurement framework for FL applied to remote sensing image classification. We evaluate multiple black-box MIA techniques, including entropy-based attacks, modified entropy attacks, and the likelihood ratio attack, across different FL algorithms and communication strategies. Experiments on two public scene classification datasets demonstrate that MIA reveals privacy leakage that accuracy alone does not capture. Our results show that communication-efficient FL strategies reduce MIA success rates while maintaining competitive performance. These findings confirm MIA as a practical privacy metric and highlight the importance of integrating privacy measurement into FL system design for remote sensing applications.
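To make the scoring behind the black-box attacks mentioned above concrete, here is a minimal sketch of the entropy-based and modified-entropy membership scores (the latter in the formulation commonly attributed to Song and Mittal), computed from a model's softmax outputs. The function names, the toy data, and the threshold calibration are illustrative assumptions, not code from the paper.

```python
import numpy as np

def entropy_score(probs: np.ndarray) -> np.ndarray:
    """Prediction entropy of softmax outputs (shape: n_samples x n_classes).

    Lower entropy (more confident prediction) suggests membership in
    the training set, since models tend to be overconfident on seen data.
    """
    eps = 1e-12  # guard against log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def modified_entropy_score(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Modified entropy: like entropy, but also uses the true label y.

    Mentr(p, y) = -(1 - p_y) log(p_y) - sum_{i != y} p_i log(1 - p_i).
    Lower scores again suggest membership.
    """
    eps = 1e-12
    n = probs.shape[0]
    p_y = probs[np.arange(n), labels]
    # term for the correct class
    score = -(1.0 - p_y) * np.log(p_y + eps)
    # terms for the incorrect classes
    mask = np.ones_like(probs, dtype=bool)
    mask[np.arange(n), labels] = False
    score += -np.sum(np.where(mask, probs * np.log(1.0 - probs + eps), 0.0), axis=1)
    return score

def infer_membership(scores: np.ndarray, threshold: float) -> np.ndarray:
    """Predict 'member' when the score falls below a calibrated threshold.

    In practice the threshold would be calibrated on shadow-model outputs;
    here it is simply a given parameter.
    """
    return scores < threshold

# Toy usage with random softmax outputs (illustrative only).
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 10, size=4)
print(infer_membership(modified_entropy_score(probs, labels), threshold=2.0))
```

In a federated setting, an attacker with black-box access would apply such scores to the global model's outputs after each communication round; the likelihood ratio attack evaluated in the paper follows the same query-and-score pattern but calibrates per-example score distributions with shadow models.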
Similar Papers
Membership Inference Attacks fueled by Few-Shot Learning to detect privacy leakage tackling data integrity
Cryptography and Security
Detects whether private data was used to train AI.
Securing Genomic Data Against Inference Attacks in Federated Learning Environments
Cryptography and Security
Protects secret health codes from hackers.
Leveraging Soft Prompts for Privacy Attacks in Federated Prompt Tuning
Machine Learning (CS)
Steals private data from AI learning systems.