Fairness-Constrained Optimization Attack in Federated Learning
By: Harsh Kasyap, Minghong Fang, Zhuqing Liu, and more
Potential Business Impact:
Makes AI unfairly biased, even when it seems accurate.
Federated learning (FL) is a privacy-preserving machine learning technique that facilitates collaboration among participants across demographics. FL enables model sharing while restricting the movement of raw data. Because FL gives participants full control over their own training data, it is susceptible to poisoning attacks. Such collaboration can also propagate bias among participants, even unintentionally, due to differing data distributions or historical bias present in the data. This paper proposes an intentional fairness attack in which a malicious client sends a biased model by increasing the fairness loss during local training, even under a homogeneous data distribution. The fairness loss is computed by solving an optimization problem over fairness metrics such as demographic parity and equalized odds. The attack is insidious and hard to detect, as it maintains global accuracy even while increasing bias. We evaluate the attack against state-of-the-art Byzantine-robust and fairness-aware aggregation schemes over different datasets and in various settings. The empirical results demonstrate the attack's efficacy, increasing bias by up to 90%, even with a single malicious client in the FL system.
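The abstract does not give the authors' exact training objective, but a minimal sketch of the idea is shown below: a malicious client locally minimizes the task loss while *maximizing* a demographic-parity gap, then submits the resulting update as usual. Everything here is an illustrative assumption, not the paper's implementation: PyTorch, binary classification, a binary sensitive attribute `s` in each batch, and the attack-strength weight `lam` are all hypothetical choices.

```python
# Hypothetical sketch (not the authors' code): a malicious FL client that
# trades a little task loss for a larger demographic-parity gap.
import torch
import torch.nn as nn


def demographic_parity_gap(logits, sensitive):
    """Absolute difference in positive-prediction rates between the two
    sensitive groups (assumes both groups appear in the batch)."""
    probs = torch.sigmoid(logits).squeeze(-1)
    rate_a = probs[sensitive == 0].mean()
    rate_b = probs[sensitive == 1].mean()
    return (rate_a - rate_b).abs()


def malicious_local_update(model, loader, epochs=1, lr=0.01, lam=1.0):
    """Local training that minimizes the task loss but maximizes the
    fairness gap; `lam` controls the strength of the attack."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y, s in loader:  # s = sensitive attribute (0/1), assumed available
            opt.zero_grad()
            logits = model(x)
            task_loss = bce(logits.squeeze(-1), y.float())
            gap = demographic_parity_gap(logits, s)
            # Subtracting the gap pushes the model toward *more* disparity
            # while the task term keeps accuracy roughly intact.
            loss = task_loss - lam * gap
            loss.backward()
            opt.step()
    return model.state_dict()  # submitted to the server like a benign update
```

An analogous penalty built from group-conditional true/false positive rates would target equalized odds instead of demographic parity; the structure of the update stays the same.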
Similar Papers
Fairness in Federated Learning: Trends, Challenges, and Opportunities
Machine Learning (CS)
Makes AI learn fairly from everyone's private data.
Robust Federated Learning under Adversarial Attacks via Loss-Based Client Clustering
Machine Learning (CS)
Protects smart learning from bad data.