Find a Scapegoat: Poisoning Membership Inference Attack and Defense to Federated Learning
By: Wenjin Mo, Zhiyuan Li, Minghong Fang, and more
Potential Business Impact:
Protects private data from sneaky model hackers.
Federated learning (FL) allows multiple clients to collaboratively train a global machine learning model under the coordination of a central server, without sharing their raw data. This approach is particularly appealing in the era of privacy regulations such as the GDPR, and many prominent companies have adopted it. However, FL's distributed nature makes it susceptible to poisoning attacks, in which malicious clients controlled by an attacker submit harmful model updates to compromise the global model. Most existing poisoning attacks in FL aim to degrade the model's integrity, for example by reducing its accuracy; the privacy risks posed by such attacks have received far less attention. In this study, we introduce FedPoisonMIA, a novel poisoning membership inference attack against FL, in which malicious clients craft their local model updates so that the resulting global model leaks membership information, i.e., whether a particular example was part of a client's training data. We also propose a robust defense mechanism to mitigate FedPoisonMIA. Extensive experiments across multiple datasets demonstrate the attack's effectiveness, while our defense reduces, though does not fully eliminate, its impact.
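To make the setting concrete, below is a minimal NumPy sketch of the kind of FL training loop in which such an attack operates: honest clients send gradients computed on their local data, the server performs plain FedAvg-style averaging, and one malicious client submits a crafted update. The crafting rule shown (a gradient on the attacker's target example) and the loss-based membership signal at the end are illustrative placeholders, not the FedPoisonMIA construction from the paper; the function and parameter names (grad, N_CLIENTS, LR, and so on) are hypothetical.

```python
# Minimal FedAvg sketch (NumPy) showing where a poisoning membership
# inference attack would operate. The malicious update below is a
# placeholder; the actual FedPoisonMIA crafting rule is the paper's
# contribution and is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
DIM, N_CLIENTS, ROUNDS, LR = 10, 5, 20, 0.5

def local_data(n=100):
    # Synthetic local dataset for one client.
    X = rng.normal(size=(n, DIM))
    w_true = rng.normal(size=DIM)
    y = (X @ w_true > 0).astype(float)
    return X, y

def grad(w, X, y):
    # Logistic-regression gradient on a client's local data.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

clients = [local_data() for _ in range(N_CLIENTS)]
# Example the attacker wants to test for membership (it belongs to client 0).
target_x, target_y = clients[0][0][0], clients[0][1][0]
w_global = np.zeros(DIM)

for t in range(ROUNDS):
    updates = []
    for i, (X, y) in enumerate(clients):
        if i == N_CLIENTS - 1:
            # Malicious client: instead of an honest gradient, it submits a
            # crafted update meant to make the next global model reveal
            # whether (target_x, target_y) was used in training. The real
            # crafting strategy is not shown here; this is a stand-in.
            crafted = grad(w_global, target_x[None, :], np.array([target_y]))
            updates.append(crafted)
        else:
            updates.append(grad(w_global, X, y))
    # Server: plain FedAvg aggregation. A defense such as the one proposed
    # in the paper would replace this step with a robust aggregation rule.
    w_global -= LR * np.mean(updates, axis=0)

# Attacker-side membership signal: the global model's loss on the target
# example; a low loss after poisoning is treated as evidence of membership.
p = 1.0 / (1.0 + np.exp(-target_x @ w_global))
loss = -(target_y * np.log(p + 1e-12) + (1 - target_y) * np.log(1 - p + 1e-12))
print(f"target-example loss after {ROUNDS} rounds: {loss:.4f}")
```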
Similar Papers
Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach
Machine Learning (CS)
Makes AI learn wrong things on purpose.
SoK: Benchmarking Poisoning Attacks and Defenses in Federated Learning
Cryptography and Security
Protects shared computer learning from bad data.
United We Defend: Collaborative Membership Inference Defenses in Federated Learning
Cryptography and Security
Protects private data from being guessed by others.