ModShift: Model Privacy via Designed Shifts

Published: July 26, 2025 | arXiv ID: 2507.20060v1

By: Nomaan A. Kherani, Urbashi Mitra

Potential Business Impact:

Keeps machine learning model updates private from eavesdroppers.

Business Areas:
Privacy and Security

In this paper, shifts are introduced to preserve model privacy against an eavesdropper in federated learning. Model learning is treated as a parameter estimation problem. This perspective allows us to derive the Fisher Information matrix of the model updates from the shifted updates and drive it to singularity, thus posing a hard estimation problem for Eve. The shifts are securely shared with the central server to maintain model accuracy at the server and participating devices. A convergence test is proposed to detect whether model updates have been tampered with, and we show that our scheme passes this test. Numerical results show that our scheme achieves a higher model shift than a noise injection scheme while requiring a lower-bandwidth secret channel.
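The masking protocol described above can be sketched in a few lines: the client adds a shift to its model update before transmission, so the eavesdropper sees only the shifted update, while the server, which receives the shift over the secret channel, subtracts it to recover the true update. This is a minimal illustration only; the paper designs the shifts to drive the Fisher Information matrix to singularity, whereas here a random shift stands in as a placeholder, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_send(true_update, shift):
    # Eve observes only the shifted update on the channel.
    return true_update + shift

def server_recover(shifted_update, shift):
    # The server received the shift over the secret channel
    # and removes it to recover the true model update.
    return shifted_update - shift

true_update = rng.normal(size=5)   # model update (e.g., a gradient)
shift = rng.normal(size=5)         # placeholder shift (the paper designs this)

observed = client_send(true_update, shift)
recovered = server_recover(observed, shift)

assert np.allclose(recovered, true_update)  # server accuracy is preserved
```

Unlike noise injection, the perturbation here is exactly invertible at the server, so model accuracy is unaffected; the design question the paper addresses is how to choose the shifts so that Eve's estimation problem becomes ill-posed.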

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)