Find a Scapegoat: Poisoning Membership Inference Attack and Defense to Federated Learning

Published: July 1, 2025 | arXiv ID: 2507.00423v1

By: Wenjin Mo, Zhiyuan Li, Minghong Fang, and more

Potential Business Impact:

Helps organizations defend training data against membership inference attacks mounted through poisoned federated learning updates.

Business Areas:
Intrusion Detection, Information Technology, Privacy and Security

Federated learning (FL) allows multiple clients to collaboratively train a global machine learning model with coordination from a central server, without needing to share their raw data. This approach is particularly appealing in the era of privacy regulations like the GDPR, leading many prominent companies to adopt it. However, FL's distributed nature makes it susceptible to poisoning attacks, in which malicious clients controlled by an attacker send harmful model updates to compromise the global model. Most existing poisoning attacks in FL aim to degrade the model's integrity, for example by reducing its accuracy, while the privacy risks posed by such attacks have received limited attention. In this study, we introduce FedPoisonMIA, a novel poisoning membership inference attack targeting FL, in which malicious clients craft local model updates to infer membership information. Additionally, we propose a robust defense mechanism to mitigate the impact of FedPoisonMIA attacks. Extensive experiments across various datasets demonstrate the attack's effectiveness, while our defense approach partially mitigates its impact.
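To make the attack surface concrete, the following is a minimal sketch of one federated averaging round with an attacker-controlled client. The training task (least-squares regression), the client data, and the malicious client's placeholder update are all illustrative assumptions; the actual crafted updates used by FedPoisonMIA are defined in the paper, and this sketch only shows where in the protocol a malicious client can substitute them.

```python
import numpy as np

def local_update(global_model, data, lr=0.1):
    """Honest client: one gradient step on a least-squares objective
    (a toy stand-in for each client's real local training)."""
    X, y = data
    grad = X.T @ (X @ global_model - y) / len(y)
    return global_model - lr * grad

def fedavg(updates):
    """Server: plain averaging of the submitted client models."""
    return np.mean(updates, axis=0)

# Toy setup: three honest clients sharing an underlying linear model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=20)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(50):
    updates = [local_update(global_w, d) for d in clients]
    # Attack surface: a malicious client is free to replace its honest
    # update with an arbitrary crafted vector, e.g.
    #   updates[0] = crafted_update(global_w)   # hypothetical attacker code
    # In FedPoisonMIA such updates are designed so that the aggregated
    # model leaks membership information about other clients' data.
    global_w = fedavg(updates)
```

With only honest clients, `global_w` converges close to `true_w`; the comment marks the single line an attacker controls, which is all the protocol exposes to each client.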

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
21 pages

Category
Computer Science:
Cryptography and Security