Unveiling Hidden Threats: Using Fractal Triggers to Boost Stealthiness of Distributed Backdoor Attacks in Federated Learning
By: Jian Wang, Hong Shen, Chan-Tong Lam
Potential Business Impact:
Makes computer learning attacks harder to find.
Traditional distributed backdoor attacks (DBA) in federated learning improve stealthiness by decomposing global triggers into sub-triggers, which, however, requires more poisoned data to maintain the attack strength and hence increases the exposure risk. To overcome this defect, this paper proposes a novel method, namely Fractal-Triggered Distributed Backdoor Attack (FTDBA), which leverages the self-similarity of fractals to enhance the feature strength of sub-triggers and hence significantly reduce the poisoning volume required for the same attack strength. To address the detectability of fractal structures in the frequency and gradient domains, we introduce a dynamic angular perturbation mechanism that adaptively adjusts perturbation intensity across the training phases to balance efficiency and stealthiness. Experiments show that FTDBA achieves a 92.3% attack success rate with only 62.4% of the poisoning volume required by traditional DBA methods, while reducing the detection rate by 22.8% and KL divergence by 41.2%. This study presents a low-exposure, high-efficiency paradigm for federated backdoor attacks and expands the application of fractal features in adversarial sample generation.
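The abstract's key idea — that a self-similar global trigger can be split into sub-triggers which each retain the full fractal feature — can be illustrated with a minimal sketch. The paper's actual trigger construction is not given here, so this uses a Sierpinski-carpet mask as a stand-in self-similar pattern; the function names and the nine-way block split are assumptions for illustration only.

```python
def fractal_trigger(depth):
    """Binary Sierpinski-carpet mask of size 3**depth x 3**depth.

    A cell is dropped (0) if, at any scale, both of its base-3
    coordinate digits equal 1 -- the classic carpet recursion.
    """
    size = 3 ** depth
    grid = [[1] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            x, y = i, j
            while x > 0 or y > 0:
                if x % 3 == 1 and y % 3 == 1:
                    grid[i][j] = 0
                    break
                x //= 3
                y //= 3
    return grid


def split_sub_triggers(grid):
    """Split the global trigger into its nine top-level blocks.

    By self-similarity, each of the eight non-centre blocks is itself
    a complete carpet one level down, so the sub-trigger a malicious
    client embeds still carries the full fractal feature on its own.
    """
    b = len(grid) // 3
    return [
        [row[bc * b:(bc + 1) * b] for row in grid[br * b:(br + 1) * b]]
        for br in range(3) for bc in range(3)
    ]


global_trigger = fractal_trigger(2)        # 9x9 global pattern
subs = split_sub_triggers(global_trigger)  # one block per client
# The top-left sub-trigger equals the depth-1 fractal itself:
assert subs[0] == fractal_trigger(1)
```

The self-similarity is exactly what the abstract credits for the reduced poisoning volume: unlike a plain DBA decomposition, where each client holds an arbitrary fragment, every sub-trigger here is a smaller copy of the whole pattern, so its standalone feature strength is higher.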
Similar Papers
IPBA: Imperceptible Perturbation Backdoor Attack in Federated Self-Supervised Learning
Cryptography and Security
Makes AI models secretly learn wrong things.
Enhancing the Effectiveness and Durability of Backdoor Attacks in Federated Learning through Maximizing Task Distinction
Machine Learning (CS)
Makes secret computer tricks harder to find.
BDFirewall: Towards Effective and Expeditiously Black-Box Backdoor Defense in MLaaS
Cryptography and Security
Protects smart programs from secret sabotage.