FedSDP: Explainable Differential Privacy in Federated Learning via Shapley Values
By: Yunbo Li, Jiaping Gui, Yue Wu
Potential Business Impact:
Protects private data while multiple parties train a shared AI model.
Federated learning (FL) enables participants to keep their data local while collaborating on training, yet it remains vulnerable to privacy attacks such as data reconstruction. Existing differential privacy (DP) techniques inject noise dynamically into the training process to mitigate the impact of excessive noise on model utility. However, this dynamic scheduling is often grounded in factors only indirectly related to privacy, making it difficult to clearly explain the relationship between dynamic noise adjustments and privacy requirements. To address this issue, we propose FedSDP, a novel and explainable DP-based privacy protection mechanism that guides noise injection based on privacy contribution. Specifically, FedSDP leverages Shapley values to assess the contribution of private attributes to local model training and dynamically adjusts the amount of injected noise accordingly. By providing theoretical insight into injecting varying scales of noise during local training, FedSDP improves interpretability. Extensive experiments demonstrate that FedSDP achieves a superior balance between privacy preservation and model performance, surpassing state-of-the-art (SOTA) solutions.
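The abstract only sketches the mechanism. As a rough illustration of the general idea (not the paper's actual algorithm or API), the Python snippet below computes exact Shapley values over a small set of private attributes and maps higher contribution to a larger Gaussian noise multiplier for a local model update. All names (`exact_shapley`, `contribution_to_noise`, the sigma range) and the additive toy utility are hypothetical stand-ins for whatever local-training utility FedSDP actually evaluates.

```python
import itertools
from math import factorial

import numpy as np


def exact_shapley(attributes, utility):
    """Exact Shapley values for a small set of private attributes.

    `utility(subset)` returns a scalar score, e.g. local validation
    accuracy of a model trained using only the attributes in `subset`.
    (Hypothetical helper; not from the paper.)
    """
    n = len(attributes)
    phi = {a: 0.0 for a in attributes}
    for a in attributes:
        others = [x for x in attributes if x != a]
        for k in range(n):  # coalition sizes 0 .. n-1
            for subset in itertools.combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[a] += weight * (utility(set(subset) | {a}) - utility(set(subset)))
    return phi


def contribution_to_noise(phi, sigma_min=0.5, sigma_max=2.0):
    """Map normalized contributions to Gaussian noise multipliers:
    attributes that contribute more to training receive more noise.
    (Illustrative mapping; the sigma range is an assumption.)"""
    vals = np.array(list(phi.values()))
    spread = vals.max() - vals.min()
    norm = (vals - vals.min()) / spread if spread > 0 else np.zeros_like(vals)
    return {a: sigma_min + (sigma_max - sigma_min) * s for a, s in zip(phi, norm)}


# Toy usage: with an additive utility, Shapley values recover the weights exactly.
importance = {"age": 0.1, "zip_code": 0.3, "income": 0.6}
phi = exact_shapley(list(importance), lambda s: sum(importance[a] for a in s))
sigmas = contribution_to_noise(phi)

# Gaussian-mechanism sketch: perturb a (clipped) local update before upload.
update = np.zeros(10)                 # stand-in for a clipped gradient/update
sigma = max(sigmas.values())          # e.g. scale by the most sensitive attribute
noisy_update = update + np.random.normal(0.0, sigma, size=update.shape)
print(phi, sigmas)
```

Exact Shapley computation is exponential in the number of attributes; a practical variant would use sampling-based approximation, but the toy above is enough to show how attribution scores could drive per-round noise scales.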
Similar Papers
Privacy-Preserving Decentralized Federated Learning via Explainable Adaptive Differential Privacy
Cryptography and Security
Keeps private data safe while learning.
Mitigating Privacy-Utility Trade-off in Decentralized Federated Learning via $f$-Differential Privacy
Machine Learning (CS)
Keeps private data safe when learning together.
Differential Privacy Personalized Federated Learning Based on Dynamically Sparsified Client Updates
Machine Learning (CS)
Keeps your private data safe during AI training.