Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach

Published: May 22, 2025 | arXiv ID: 2505.16403v2

By: Huazi Pan, Yanjun Zhang, Leo Yu Zhang, and more

Potential Business Impact:

Lets attackers make an AI model learn wrong things on purpose, degrading its accuracy by a precisely chosen amount without being noticed.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Manipulation of local training data and local updates, i.e., the poisoning attack, is the main threat arising from the collaborative nature of the federated learning (FL) paradigm. Most existing poisoning attacks aim to manipulate local data/models in a way that causes denial-of-service (DoS) issues. In this paper, we introduce a novel attack method, named the Federated Learning Sliding Attack (FedSA) scheme, which precisely controls the extent of poisoning in a subtle, controlled manner. It operates with a predefined objective, such as reducing the global model's prediction accuracy by 10%. FedSA integrates robust nonlinear control theory, specifically Sliding Mode Control (SMC), with model poisoning attacks. It manipulates the updates from malicious clients to drive the global model towards a compromised state at a controlled and inconspicuous rate. Additionally, the robust control properties of FedSA allow precise control over the convergence bounds, enabling the attacker to set the global accuracy of the poisoned model to any desired level. Experimental results demonstrate that FedSA can accurately achieve a predefined global accuracy with fewer malicious clients while maintaining a high level of stealth and adjustable learning rates.
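
To give a rough intuition for how a sliding-mode controller could pace a poisoning attack toward a preset accuracy target, here is a minimal toy sketch. It is not the paper's FedSA algorithm; the function names, the gain/boundary-layer values, and the simulated accuracy response are all illustrative assumptions.

```python
# Illustrative sketch only: a toy sliding-mode-style controller that scales a
# malicious update so a monitored metric (global accuracy) slides toward a
# preset target. Names (sliding_mode_poison_scale, attack_direction) are
# hypothetical; the paper's actual FedSA update rule is not reproduced here.
import numpy as np

def sliding_mode_poison_scale(current_acc, target_acc, gain=0.05, band=0.01):
    """Return a scaling factor for the malicious update.

    s > 0  -> accuracy still above target: keep pushing the model down.
    s <= 0 -> target reached/overshot: back off so the attack stays subtle.
    A small boundary layer (band) replaces a hard sign() to avoid chattering,
    a standard smoothing trick in sliding mode control.
    """
    s = current_acc - target_acc          # sliding surface: distance to target
    return gain * np.clip(s / band, -1.0, 1.0)

def poisoned_client_update(benign_update, attack_direction, scale):
    """Blend a benign local update with an attack direction at a controlled rate."""
    return benign_update + scale * attack_direction

# Toy usage: drive a (simulated) global accuracy from 0.90 toward 0.80.
rng = np.random.default_rng(0)
benign = rng.normal(size=10)                   # stand-in for a benign model update
attack_dir = -benign / np.linalg.norm(benign)  # hypothetical degrading direction
acc, target = 0.90, 0.80
for _ in range(5):
    scale = sliding_mode_poison_scale(acc, target)
    update = poisoned_client_update(benign, attack_dir, scale)
    acc -= 2.0 * scale                         # crude proxy for the metric's response
    print(f"scale={scale:.4f}, simulated accuracy={acc:.3f}")
```

In this toy loop the attack magnitude shrinks to zero once the simulated accuracy reaches the target, which is the controlled, self-limiting behavior the abstract attributes to FedSA's use of SMC.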

Country of Origin
🇦🇺 Australia

Page Count
15 pages

Category
Computer Science:
Machine Learning (CS)