FAROS: Robust Federated Learning with Adaptive Scaling against Backdoor Attacks
By: Chenyu Hu, Qiming Hu, Sinan Chen, and more
Potential Business Impact:
Stops sneaky computer tricks in shared learning.
Federated Learning (FL) enables multiple clients to collaboratively train a shared model without exposing their local data. However, backdoor attacks pose a significant threat to FL. These attacks implant a stealthy trigger into the global model, causing it to misbehave on inputs containing the trigger while functioning normally on benign data. Although pre-aggregation detection is a primary defense direction, existing state-of-the-art defenses often rely on fixed defense parameters. This reliance exposes them to single-point-of-failure risks and renders them less effective against sophisticated attackers. To address these limitations, we propose FAROS, an enhanced FL framework that incorporates Adaptive Differential Scaling (ADS) and Robust Core-set Computing (RCC). The ADS mechanism dynamically adjusts the defense's sensitivity based on the dispersion of the gradients uploaded by clients in each round, allowing it to counter attackers who strategically shift between stealthiness and effectiveness. Furthermore, the RCC mechanism mitigates the risk of single-point failure by computing the centroid of a core set comprising the highest-confidence clients. We conducted extensive experiments across various datasets, models, and attack scenarios. The results demonstrate that our method outperforms current defenses in both attack success rate and main-task accuracy.
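To make the ADS and RCC ideas concrete, here is a minimal Python sketch of a round of robust aggregation: the acceptance threshold is scaled by the dispersion of the round's client updates, and the global update is the centroid of a core set of the most trusted clients. The function name, dispersion measure, thresholding rule, and core-set size are illustrative assumptions, not the paper's actual ADS/RCC formulas.

```python
import numpy as np

def faros_style_aggregate(client_updates, base_threshold=1.0, core_fraction=0.5):
    """Illustrative robust aggregation for one FL round.
    All parameter names and formulas are assumptions for illustration,
    not FAROS's exact algorithm."""
    updates = np.stack(client_updates)            # shape: (num_clients, dim)

    # --- Adaptive Differential Scaling (sketch) ---
    # Measure how dispersed this round's updates are and scale the
    # acceptance threshold accordingly: tighter when updates cluster,
    # looser when benign heterogeneity is high.
    centroid = updates.mean(axis=0)
    distances = np.linalg.norm(updates - centroid, axis=1)
    dispersion = distances.std() / (distances.mean() + 1e-12)
    threshold = base_threshold * (1.0 + dispersion)

    # --- Robust Core-set Computing (sketch) ---
    # Treat clients closest to the current centroid as highest-confidence,
    # keep a core set of them, and aggregate that core set's centroid
    # instead of trusting any single reference client.
    confident = distances <= threshold * np.median(distances)
    ranked = np.argsort(distances)
    core_size = max(1, int(core_fraction * len(client_updates)))
    core_idx = [i for i in ranked if confident[i]][:core_size] or list(ranked[:core_size])

    return updates[core_idx].mean(axis=0)

# Usage: aggregate flattened model updates from 10 simulated clients.
rng = np.random.default_rng(0)
updates = [rng.normal(size=100) for _ in range(10)]
global_update = faros_style_aggregate(updates)
```

Averaging over a core set rather than selecting a single "best" client is what removes the single point of failure: no individual update, benign or malicious, determines the aggregate on its own.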
Similar Papers
Watch Out for the Lifespan: Evaluating Backdoor Attacks Against Federated Model Adaptation
Machine Learning (CS)
Makes AI safer from hidden bad code.
Heterogeneity-Oblivious Robust Federated Learning
Machine Learning (CS)
Protects AI learning from bad data.
FL-PLAS: Federated Learning with Partial Layer Aggregation for Backdoor Defense Against High-Ratio Malicious Clients
Cryptography and Security
Protects shared computer learning from sneaky attacks.