Enhancing Model Privacy in Federated Learning with Random Masking and Quantization
By: Zhibo Xu, Jianhao Zhu, Jingwen Xu, and more
Potential Business Impact:
Protects proprietary AI models and sensitive data during training.
The primary goal of traditional federated learning is to protect data privacy by enabling distributed edge devices to collaboratively train a shared global model while keeping raw data decentralized at local clients. The rise of large language models (LLMs) has introduced new challenges in distributed systems, as their substantial computational requirements and the need for specialized expertise raise critical concerns about protecting intellectual property (IP). This highlights the need for a federated learning approach that can safeguard both sensitive data and proprietary models. To tackle this challenge, we propose FedQSN, a federated learning approach that leverages random masking to obscure a subnetwork of model parameters and applies quantization to the remaining parameters. Consequently, the server transmits only a privacy-preserving proxy of the global model to clients during each communication round, thus enhancing the model's confidentiality. Experimental results across various models and tasks demonstrate that our approach not only maintains strong model performance in federated learning settings but also achieves enhanced protection of model parameters compared to baseline methods.
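The core idea, masking a random subnetwork of the global model and quantizing the remaining parameters before sending the proxy to clients, can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration of the general technique, not the authors' FedQSN implementation; the function name `make_proxy` and all parameter choices (mask ratio, bit width) are assumptions for demonstration.

```python
import numpy as np

def make_proxy(params, mask_ratio=0.3, num_bits=4, seed=0):
    """Build a privacy-preserving proxy of model parameters (illustrative
    sketch only): randomly mask a subnetwork, then uniformly quantize
    the surviving weights to 2**num_bits levels."""
    rng = np.random.default_rng(seed)
    flat = params.ravel().copy()

    # Randomly mask a fraction of parameters (zero out a subnetwork).
    mask = rng.random(flat.size) < mask_ratio
    flat[mask] = 0.0

    # Uniformly quantize the remaining (unmasked) parameters.
    kept = ~mask
    if kept.any():
        lo, hi = flat[kept].min(), flat[kept].max()
        levels = 2 ** num_bits - 1
        scale = (hi - lo) / levels if hi > lo else 1.0
        flat[kept] = np.round((flat[kept] - lo) / scale) * scale + lo

    return flat.reshape(params.shape), mask.reshape(params.shape)

# Example: a toy weight matrix standing in for one layer of the global model.
weights = np.random.default_rng(1).normal(size=(4, 4)).astype(np.float32)
proxy, mask = make_proxy(weights)
```

In each communication round, the server would transmit `proxy` (and whatever bookkeeping the protocol needs) rather than the full-precision global model, so clients never observe the exact proprietary parameters.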
Similar Papers
Privacy-Preserving Quantized Federated Learning with Diverse Precision
Machine Learning (CS)
Keeps private data safe while improving learning.
Decentralized Privacy-Preserving Federated Learning of Computer Vision Models on Edge Devices
Cryptography and Security
Keeps your private data safe when computers learn together.