FedEM: A Privacy-Preserving Framework for Concurrent Utility Preservation in Federated Learning
By: Mingcong Xu, Xiaojin Zhang, Wei Chen, and more
Potential Business Impact:
Keeps private data safe while computers learn together.
Federated Learning (FL) enables collaborative training of models across distributed clients without sharing local data, addressing privacy concerns in decentralized systems. However, the gradient-sharing process exposes private data to potential leakage, compromising FL's privacy guarantees in real-world applications. To address this issue, we propose Federated Error Minimization (FedEM), a novel algorithm that incorporates controlled perturbations through adaptive noise injection. This mechanism effectively mitigates gradient leakage attacks while maintaining model performance. Experimental results on benchmark datasets demonstrate that FedEM significantly reduces privacy risks and preserves model accuracy, achieving a robust balance between privacy protection and utility preservation.
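The exact perturbation rule of FedEM is not given in this summary, but the idea of clients injecting controlled, adaptive noise into gradients before sharing them can be sketched roughly as follows. This is an illustrative example only: the function name, clipping bound, and noise scale are assumptions, not the paper's algorithm.

```python
import numpy as np

def perturb_gradient(grad, clip_norm=1.0, noise_scale=0.1, rng=None):
    """Clip a gradient to a norm bound, then add Gaussian noise.

    Illustrative sketch of adaptive noise injection; FedEM's actual
    mechanism is not specified here, so all constants are placeholders.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    # Scale down large gradients so the noise magnitude is calibrated
    # to a known bound (clip_norm) rather than the raw gradient size.
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_scale * clip_norm, size=grad.shape)
    return clipped + noise

# Each client would perturb its local gradient before sending it to
# the server, so the raw gradient (and the private data behind it)
# is never exposed.
g = np.array([3.0, 4.0])  # norm 5, will be clipped to norm 1
g_private = perturb_gradient(g, clip_norm=1.0, noise_scale=0.1,
                             rng=np.random.default_rng(0))
```

Calibrating the noise to the clipping bound rather than the raw gradient is the standard way such schemes trade off privacy (more noise) against utility (less distortion of the aggregate update).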
Similar Papers
FedRE: Robust and Effective Federated Learning with Privacy Preference
Machine Learning (CS)
Keeps private data safe while training computers.
Secure and Privacy-Preserving Federated Learning for Next-Generation Underground Mine Safety
Cryptography and Security
Keeps mine safety data private while improving alerts.
Towards Privacy-Preserving Data-Driven Education: The Potential of Federated Learning
Machine Learning (CS)
Keeps student data private while still learning.