SoK: Enhancing Cryptographic Collaborative Learning with Differential Privacy
By: Francesco Capano, Jonas Böhler, Benjamin Weggenmann
Potential Business Impact:
Keeps private data safe during collaborative machine learning.
In collaborative learning (CL), multiple parties jointly train a machine learning model on their private datasets. However, data cannot be shared directly due to privacy concerns. To ensure input confidentiality, cryptographic techniques, e.g., multi-party computation (MPC), enable training on encrypted data. Yet, even securely trained models are vulnerable to inference attacks that aim to extract memorized data from model outputs. To ensure output privacy and mitigate inference attacks, differential privacy (DP) injects calibrated noise during training. While cryptography and DP offer complementary guarantees, combining them efficiently for cryptographic and differentially private CL (CPCL) is challenging: cryptography incurs performance overheads, while DP degrades accuracy, creating a privacy-accuracy-performance trade-off that requires careful design. This work systematizes the CPCL landscape. We introduce a unified framework that generalizes common phases across CPCL paradigms, and identify secure noise sampling as the foundational phase for achieving CPCL. We analyze the trade-offs of different secure noise sampling techniques, noise types, and DP mechanisms, discussing their implementation challenges and evaluating their accuracy and cryptographic overhead across CPCL paradigms. Additionally, we implement the identified secure noise sampling options in MPC and evaluate their computation and communication costs in WAN and LAN settings. Finally, we propose future research directions based on key observations, gaps, and possible enhancements identified in the literature.
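Since the abstract does not fix a concrete mechanism, the sketch below illustrates one well-known secure-noise-sampling option, distributed Gaussian noise generation, in plaintext NumPy. Each party contributes a noise share of variance sigma^2/n, so the shares sum to the centrally calibrated N(0, sigma^2) without any single party knowing the full noise. All function names and parameters here are illustrative assumptions, not the paper's implementation; in an actual CPCL system the shares would be combined on secret-shared values inside MPC rather than in the clear.

import numpy as np

def party_noise_share(shape, sigma, n_parties, rng):
    # Each party samples a partial-variance share N(0, sigma^2 / n_parties);
    # the sum of n_parties independent shares is N(0, sigma^2) per coordinate.
    return rng.normal(0.0, sigma / np.sqrt(n_parties), size=shape)

def noisy_aggregate(clipped_grads, sigma, rng_per_party):
    # clipped_grads: one gradient array per party, each pre-clipped to a fixed
    # L2 norm bound (as in DP-SGD-style training). Returns the noisy sum.
    n = len(clipped_grads)
    shape = clipped_grads[0].shape
    total = sum(clipped_grads)
    for rng in rng_per_party:
        total = total + party_noise_share(shape, sigma, n, rng)
    return total

# Toy run: 3 parties, gradients clipped to L2 norm 1.0, noise scale sigma = 2.0.
rngs = [np.random.default_rng(seed) for seed in (1, 2, 3)]
grads = [rng.normal(size=4) for rng in rngs]
grads = [g / max(1.0, np.linalg.norm(g)) for g in grads]  # clip to norm <= 1
print(noisy_aggregate(grads, sigma=2.0, rng_per_party=rngs))

One caveat this split makes visible: if n-1 parties collude, only a residual N(0, sigma^2/n) share protects the honest party, which is why distributed-noise schemes often inflate per-party variance; the techniques the paper analyzes may make different trade-offs.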
Similar Papers
Cooperative Local Differential Privacy: Securing Time Series Data in Distributed Environments
Cryptography and Security
Keeps your personal data private when shared.
Tight and Practical Privacy Auditing for Differentially Private In-Context Learning
Cryptography and Security
Checks if AI models leak private information.
Local Layer-wise Differential Privacy in Federated Learning
Cryptography and Security
Adds privacy to each layer of federated AI learning.