Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach
By: Huazi Pan, Yanjun Zhang, Leo Yu Zhang, and more
Potential Business Impact:
Makes AI learn wrong things on purpose.
Manipulation of local training data and local updates, i.e., the poisoning attack, is the main threat arising from the collaborative nature of the federated learning (FL) paradigm. Most existing poisoning attacks aim to manipulate local data/models in a way that causes denial-of-service (DoS) issues. In this paper, we introduce a novel attack method, the Federated Learning Sliding Attack (FedSA) scheme, which injects a precisely controlled amount of poisoning in a subtle, controlled manner. It operates with a predefined objective, such as reducing the global model's prediction accuracy by 10%. FedSA integrates a robust nonlinear control technique, Sliding Mode Control (SMC), with model poisoning attacks. It manipulates the updates from malicious clients to drive the global model toward a compromised state at a controlled and inconspicuous rate. Additionally, the robust control properties of FedSA allow precise control over the convergence bounds, enabling the attacker to set the global accuracy of the poisoned model to any desired level. Experimental results demonstrate that FedSA can accurately achieve a predefined global accuracy with fewer malicious clients while maintaining a high level of stealth and adjustable learning rates.
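The abstract does not spell out the controller itself, but the core idea of sliding mode control applied to poisoning can be illustrated in rough terms: the malicious client defines a sliding surface as the gap between the global model's current accuracy and the attacker's target accuracy, then scales its crafted perturbation by a saturated switching term so the accuracy is driven to the target and held there. The sketch below is a minimal, hypothetical illustration under these assumptions; the function name, gains (lam, eps), and the choice of perturbation direction are not taken from the paper and are not the FedSA algorithm.

```python
import numpy as np

def sliding_mode_poison_update(honest_update, acc_current, acc_target,
                               lam=5.0, eps=0.05):
    """Illustrative sliding-mode-style poisoning controller (not FedSA itself).

    Sliding surface s = current accuracy - target accuracy. A saturated
    version of sign(s) (to soften chattering) scales a perturbation added
    to the honest local update, so the global accuracy is steered toward
    the attacker's target and then held near it.
    """
    s = acc_current - acc_target                  # sliding surface: accuracy error
    sat = np.clip(s / eps, -1.0, 1.0)             # saturated sign(s), reduces chattering
    direction = -honest_update / (np.linalg.norm(honest_update) + 1e-12)
    # Degrades learning while accuracy is above the target,
    # reinforces the honest update once it drops below the target.
    perturbation = lam * sat * direction
    return honest_update + perturbation

# Toy usage: a malicious client crafts its update for one round.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest_update = rng.normal(size=10)           # stand-in for a real model update
    poisoned = sliding_mode_poison_update(honest_update,
                                          acc_current=0.82, acc_target=0.72)
    print(poisoned)
```

The design intent mirrored here is the one named in the abstract: the switching term vanishes as the accuracy gap closes, which is what lets the attacker hold the poisoned model at a predefined accuracy level rather than driving it to complete failure.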
Similar Papers
Find a Scapegoat: Poisoning Membership Inference Attack and Defense to Federated Learning
Cryptography and Security
Protects private data from sneaky model hackers.
SoK: Benchmarking Poisoning Attacks and Defenses in Federated Learning
Cryptography and Security
Protects shared computer learning from bad data.
A Bayesian Incentive Mechanism for Poison-Resilient Federated Learning
Machine Learning (CS)
Stops bad guys from messing up shared computer learning.