IPBA: Imperceptible Perturbation Backdoor Attack in Federated Self-Supervised Learning
By: Jiayao Wang, Yang Song, Zhendong Zhao, and more
Potential Business Impact:
Makes AI models secretly learn wrong things.
Federated self-supervised learning (FSSL) combines the advantages of decentralized modeling and unlabeled representation learning, serving as a cutting-edge paradigm with strong potential for scalability and privacy preservation. Although FSSL has garnered increasing attention, research indicates that it remains vulnerable to backdoor attacks. Existing attack methods generally rely on visually obvious triggers, making it difficult to meet the stealth and practicality requirements of real-world deployment. In this paper, we propose an imperceptible and effective backdoor attack against FSSL, called IPBA. Our empirical study reveals that existing imperceptible triggers face a series of challenges in FSSL, particularly limited transferability, feature entanglement with augmented samples, and out-of-distribution properties. These issues collectively undermine the effectiveness and stealthiness of traditional backdoor attacks in FSSL. To overcome these challenges, IPBA decouples the feature distributions of backdoor and augmented samples and introduces the Sliced-Wasserstein distance to mitigate the out-of-distribution properties of backdoor samples, thereby optimizing the trigger generation process. Experimental results across several FSSL scenarios and datasets show that IPBA significantly outperforms existing backdoor attack methods and exhibits strong robustness under various defense mechanisms.
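The abstract does not give implementation details, but the role of the Sliced-Wasserstein distance can be illustrated with a minimal sketch: it measures how far backdoor-sample features drift from the clean feature distribution and could be minimized as part of trigger optimization. Everything below (the function name, the num_projections parameter, and the PyTorch setup) is an illustrative assumption, not the authors' code.

```python
# Minimal sketch (assumed, not from the paper): approximate the Sliced-Wasserstein
# distance between two batches of features, e.g. backdoor-sample features vs.
# clean-sample features from a shared encoder.
import torch

def sliced_wasserstein(x: torch.Tensor, y: torch.Tensor, num_projections: int = 128) -> torch.Tensor:
    """x, y: (batch, dim) feature tensors; equal batch sizes are assumed here."""
    dim = x.size(1)
    # Random unit directions defining 1-D projections of the feature space.
    theta = torch.randn(num_projections, dim, device=x.device)
    theta = theta / theta.norm(dim=1, keepdim=True)
    # Project both batches onto each direction: shape (batch, num_projections).
    proj_x = x @ theta.T
    proj_y = y @ theta.T
    # The 1-D Wasserstein distance reduces to comparing sorted projections.
    proj_x, _ = torch.sort(proj_x, dim=0)
    proj_y, _ = torch.sort(proj_y, dim=0)
    return ((proj_x - proj_y) ** 2).mean()
```

Under these assumptions, the returned value could serve as a regularization term in the trigger-generation loss, pulling triggered samples toward the clean feature distribution and thereby countering the out-of-distribution behavior the abstract identifies.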
Similar Papers
SFIBA: Spatial-based Full-target Invisible Backdoor Attacks
Cryptography and Security
Hides secret messages in pictures to trick computers.
Unveiling Hidden Threats: Using Fractal Triggers to Boost Stealthiness of Distributed Backdoor Attacks in Federated Learning
Cryptography and Security
Makes computer learning attacks harder to find.
BAPFL: Exploring Backdoor Attacks Against Prototype-based Federated Learning
Machine Learning (CS)
Makes AI models safer from sneaky attacks.