ModShift: Model Privacy via Designed Shifts
By: Nomaan A. Kherani, Urbashi Mitra
Potential Business Impact:
Keeps computer learning private from spies.
In this paper, shifts are introduced to preserve model privacy against an eavesdropper in federated learning. Model learning is treated as a parameter estimation problem. This perspective allows us to derive the Fisher Information matrix of the model updates from the shifted updates and drive it to singularity, thus posing a hard estimation problem for Eve. The shifts are securely shared with the central server to maintain model accuracy at the server and the participating devices. A convergence test is proposed to detect whether model updates have been tampered with, and we show that our scheme passes this test. Numerical results show that our scheme achieves a higher model shift than a noise-injection scheme while requiring a secret channel with lower bandwidth.
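The sketch below is a minimal illustration of the shift-and-recover idea described in the abstract: the client adds a shift to its model update before sending it over the eavesdropped channel, and the server, which receives the shift over the secret channel, subtracts it to recover the true update. The function names and the random placeholder shift are hypothetical; the paper's actual shifts are designed to make the eavesdropper's Fisher Information matrix singular, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_shift_update(true_update, shift):
    """Client side: add the designed shift before transmitting on the public channel."""
    return true_update + shift

def server_recover_update(shifted_update, shift):
    """Server side: remove the shift received over the low-bandwidth secret channel."""
    return shifted_update - shift

# Hypothetical model update from one federated client.
true_update = rng.normal(size=8)

# Placeholder shift; the paper designs shifts so that Eve faces a
# singular Fisher Information matrix, i.e. a hard estimation problem.
shift = rng.normal(scale=5.0, size=8)

sent = client_shift_update(true_update, shift)   # what the eavesdropper observes
recovered = server_recover_update(sent, shift)   # what the server aggregates

assert np.allclose(recovered, true_update)
print("Eavesdropper sees:", np.round(sent, 2))
print("Server recovers :", np.round(recovered, 2))
```

Because the shift is removed exactly at the server, model accuracy is unaffected; only the observer on the public channel sees the perturbed updates.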
Similar Papers
Decentralized Privacy-Preserving Federated Learning of Computer Vision Models on Edge Devices
Cryptography and Security
Keeps your private data safe when computers learn together.
Enhancing Model Privacy in Federated Learning with Random Masking and Quantization
Machine Learning (CS)
Protects secret AI models and data during training.
On Model Protection in Federated Learning against Eavesdropping Attacks
Cryptography and Security
Keeps secret computer learning from being spied on.