Strategic Incentivization for Locally Differentially Private Federated Learning
By: Yashwant Krishna Pagoti, Arunesh Sinha, Shamik Sural
Potential Business Impact:
Helps protect client privacy while preserving the accuracy of machine learning models.
In Federated Learning (FL), multiple clients jointly train a machine learning model by sharing gradient information, instead of raw data, with a server over multiple rounds. To address the possibility of information leakage despite sharing only the gradients, Local Differential Privacy (LDP) is often used. In LDP, clients add a chosen amount of noise to the gradients before sending them to the server. Although such noise addition protects the privacy of clients, it degrades global model accuracy. In this paper, we model this privacy-accuracy trade-off as a game, where the server incentivizes the clients to add less noise in order to achieve higher accuracy, while the clients attempt to preserve their privacy at the cost of a potential loss in accuracy. A token-based incentivization mechanism is introduced in which the number of tokens credited to a client in an FL round is a function of the degree of perturbation of its gradients. The client can later access a newly updated global model only after acquiring enough tokens, which are then deducted from its balance. We identify the players, their actions, and payoffs, and perform a strategic analysis of the game. Extensive experiments were carried out to study the impact of different parameters.
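The mechanism described in the abstract can be sketched in a few lines of code. The following is a minimal, illustrative Python sketch, not the paper's implementation: it assumes Gaussian noise calibrated as sensitivity/ε, a linear token schedule (tokens_for_round), and a fixed token price (MODEL_ACCESS_COST) for downloading the updated global model. All function names and parameter values are hypothetical.

```python
import numpy as np

def perturb_gradient(grad, epsilon, sensitivity=1.0, rng=None):
    """LDP-style perturbation: smaller epsilon means more noise.
    The Gaussian mechanism and sensitivity/epsilon calibration are
    illustrative assumptions, not the paper's specification."""
    rng = rng or np.random.default_rng()
    sigma = sensitivity / epsilon
    return grad + rng.normal(0.0, sigma, size=grad.shape)

def tokens_for_round(epsilon, base_tokens=10.0, epsilon_max=5.0):
    """Server-side token credit, increasing in epsilon (less noise earns
    more tokens). The linear schedule is an assumed example form."""
    return base_tokens * min(epsilon, epsilon_max) / epsilon_max

MODEL_ACCESS_COST = 8.0  # hypothetical token price of the updated global model

class Client:
    def __init__(self, epsilon):
        self.epsilon = epsilon  # per-round privacy budget chosen by the client
        self.balance = 0.0      # accumulated token balance

    def send_update(self, grad):
        noisy = perturb_gradient(grad, self.epsilon)
        self.balance += tokens_for_round(self.epsilon)  # credit for this round
        return noisy

    def fetch_model(self, global_model):
        # Access to the new global model requires spending tokens.
        if self.balance < MODEL_ACCESS_COST:
            raise RuntimeError("insufficient tokens")
        self.balance -= MODEL_ACCESS_COST
        return global_model.copy()

# Toy rounds: two clients with different privacy choices.
rng = np.random.default_rng(0)
global_model = np.zeros(4)
cautious, generous = Client(epsilon=0.5), Client(epsilon=4.0)
for _ in range(3):
    true_grad = rng.normal(size=4)
    updates = [c.send_update(true_grad) for c in (cautious, generous)]
    global_model -= 0.1 * np.mean(updates, axis=0)  # FedAvg-style step
print(cautious.balance, generous.balance)  # generous accrues tokens faster
```

Running the toy loop shows the intended incentive: the client that perturbs less (larger ε) accumulates tokens faster and can afford the updated global model sooner, while a more privacy-conscious client trades model access for stronger perturbation.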
Similar Papers
Local Differential Privacy for Federated Learning with Fixed Memory Usage and Per-Client Privacy
Cryptography and Security
Keeps private data safe while training AI.
Mitigating Privacy-Utility Trade-off in Decentralized Federated Learning via $f$-Differential Privacy
Machine Learning (CS)
Keeps private data safe when learning together.
Privacy-Preserving Decentralized Federated Learning via Explainable Adaptive Differential Privacy
Cryptography and Security
Keeps private data safe while learning.