Stealth by Conformity: Evading Robust Aggregation through Adaptive Poisoning
By: Ryan McGaughey, Jesus Martinez del Rincon, Ihsen Alouani
Potential Business Impact:
Shows that attackers can covertly poison AI models trained with federated learning, even when standard robust-aggregation defenses are in place.
Federated Learning (FL) is a distributed learning paradigm designed to address privacy concerns. However, FL is vulnerable to poisoning attacks, where Byzantine clients compromise the integrity of the global model by submitting malicious updates. Robust aggregation methods have been widely adopted to mitigate such threats, relying on the core assumption that malicious updates are inherently out-of-distribution and can therefore be identified and excluded before aggregating client updates. In this paper, we challenge this underlying assumption by showing that a model can be poisoned while keeping malicious updates within the main distribution. We propose Chameleon Poisoning (CHAMP), an adaptive and evasive poisoning strategy that exploits side-channel feedback from the aggregation process to guide the attack. Specifically, the adversary continuously infers whether its malicious contribution has been incorporated into the global model and adapts accordingly. This enables a dynamic adjustment of the local loss function, balancing a malicious component with a camouflaging component, thereby increasing the effectiveness of the poisoning while evading robust aggregation defenses. CHAMP enables more effective and evasive poisoning, highlighting a fundamental limitation of existing robust aggregation defenses and underscoring the need for new strategies to secure federated learning against sophisticated adversaries. Our approach is evaluated on two datasets, reaching an average increase of 47.07% in attack success rate against nine robust aggregation defenses.
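The adaptive loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: the function names (`chameleon_loss`, `update_alpha`), the linear weighting between the two loss components, and the fixed step size for the feedback-driven adjustment are all assumptions made for clarity. In the paper, the feedback signal comes from a side channel on the aggregation process; here it is abstracted as a boolean.

```python
def chameleon_loss(malicious_loss: float, camouflage_loss: float, alpha: float) -> float:
    """Combined local objective for the Byzantine client.

    alpha weights the malicious component (e.g. a backdoor objective);
    (1 - alpha) weights the camouflaging component that keeps the
    resulting update inside the benign-update distribution.
    """
    return alpha * malicious_loss + (1.0 - alpha) * camouflage_loss


def update_alpha(alpha: float, was_incorporated: bool,
                 step: float = 0.1, lo: float = 0.05, hi: float = 0.95) -> float:
    """Adapt the malicious weight using side-channel feedback.

    was_incorporated: the adversary's inference of whether its last
    update was accepted by the robust aggregator (abstracted here).
    If accepted, push harder on the attack; if filtered out, retreat
    toward camouflage so the next update looks in-distribution again.
    """
    if was_incorporated:
        return min(hi, alpha + step)
    return max(lo, alpha - step)
```

A typical round would compute both loss terms on the local data, combine them with the current `alpha`, train, and then call `update_alpha` after observing the next global model. The clamping bounds keep the attacker from ever going fully malicious (which would make the update an outlier) or fully benign (which would waste the round).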