Leveraging Membership Inference Attacks for Privacy Measurement in Federated Learning for Remote Sensing Images

Published: January 8, 2026 | arXiv ID: 2601.06200v1

By: Anh-Kiet Duong, Petra Gomez-Krämer, Hoàng-Ân Lê, and more

Potential Business Impact:

Measures and reduces leakage of private imagery used to train AI models.

Business Areas:
Intrusion Detection, Information Technology, Privacy and Security

Federated Learning (FL) enables collaborative model training while keeping training data localized, allowing us to preserve privacy in various domains including remote sensing. However, recent studies show that FL models may still leak sensitive information through their outputs, motivating the need for rigorous privacy evaluation. In this paper, we leverage membership inference attacks (MIA) as a quantitative privacy measurement framework for FL applied to remote sensing image classification. We evaluate multiple black-box MIA techniques, including entropy-based attacks, modified entropy attacks, and the likelihood ratio attack, across different FL algorithms and communication strategies. Experiments conducted on two public scene classification datasets demonstrate that MIA effectively reveals privacy leakage not captured by accuracy alone. Our results show that communication-efficient FL strategies reduce MIA success rates while maintaining competitive performance. These findings confirm MIA as a practical metric and highlight the importance of integrating privacy measurement into FL system design for remote sensing applications.
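To illustrate the black-box attacks the abstract mentions, the sketch below implements the standard entropy-based membership score and the modified-entropy variant, which score a sample as a likely training-set member when the model is unusually confident on it. This is a minimal illustration, not the paper's code: the function names, the `threshold` parameter, and the use of raw softmax vectors are assumptions; in practice the threshold would be calibrated per class, e.g. with shadow models.

```python
import numpy as np

def prediction_entropy(probs, eps=1e-12):
    """Shannon entropy of a softmax output vector.

    Members (training samples) tend to receive confident, low-entropy
    predictions, so lower scores suggest membership.
    """
    probs = np.clip(probs, eps, 1.0)
    return float(-np.sum(probs * np.log(probs)))

def modified_entropy(probs, true_label, eps=1e-12):
    """Modified entropy score (label-aware variant of the entropy attack).

    Unlike plain entropy, it uses the true label: confidence on the
    correct class lowers the score, confidence on wrong classes raises it.
    Lower scores again suggest membership.
    """
    probs = np.clip(probs, eps, 1.0 - eps)
    score = -(1.0 - probs[true_label]) * np.log(probs[true_label])
    for c, p in enumerate(probs):
        if c != true_label:
            score -= p * np.log(1.0 - p)
    return float(score)

def entropy_attack(probs, threshold):
    """Binary membership decision: entropy below threshold -> 'member'."""
    return prediction_entropy(probs) < threshold
```

A confident output such as `[0.97, 0.01, 0.01, 0.01]` yields a much lower entropy than a uniform one, so it would be flagged as a member under a moderate threshold; comparing such scores between known members and non-members is how these attacks turn into the quantitative privacy metric the paper evaluates.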

Country of Origin
🇫🇷 France

Page Count
5 pages

Category
Computer Science:
Cryptography and Security