Differential Privacy in Federated Learning: Mitigating Inference Attacks with Randomized Response
By: Ozer Ozturk, Busra Buyuktanir, Gozde Karatas Baydogmus, and more
Potential Business Impact:
Keeps your private data safe while training AI.
Machine learning models deployed in distributed architectures of servers and clients require large amounts of data to achieve high accuracy. Traditionally, data obtained from clients are collected on a central server for model training. However, storing data on a central server raises security and privacy concerns. To address this issue, the federated learning architecture was proposed. In federated learning, each client trains a local model on its own data. The trained models are periodically transmitted to the central server, which combines them using federated aggregation algorithms to obtain a global model. This global model is distributed back to the clients, and the process repeats cyclically. Although keeping data on the clients improves security, concerns remain: attackers can mount inference attacks on the transmitted models to approximate the training dataset, potentially causing data leakage. In this study, differential privacy was applied to mitigate this vulnerability, and a performance analysis was conducted. The Data-Unaware Classification Based on Association (duCBA) algorithm was used as the federated aggregation method. Differential privacy was implemented on the data using the Randomized Response technique, and the trade-off between security and performance was examined under different epsilon values. As the epsilon value decreased, model accuracy declined and class prediction imbalances emerged. This indicates that stronger privacy does not always yield practically useful models, and that the balance between security and performance must be weighed carefully.
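The abstract names Randomized Response as the perturbation mechanism but gives no implementation details, so the following is a minimal sketch of the generalized (k-ary) randomized response mechanism with an unbiased frequency estimator. The function names, the class domain, and the example data are illustrative assumptions, not the authors' duCBA pipeline.

```python
import math
import random
from collections import Counter

def randomized_response(value, domain, epsilon):
    """Generalized (k-ary) randomized response: report the true value with
    probability e^eps / (e^eps + k - 1), otherwise a uniformly random other
    value. This perturbation satisfies epsilon-local differential privacy."""
    k = len(domain)
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_truth:
        return value
    return random.choice([v for v in domain if v != value])

def estimate_frequencies(reports, domain, epsilon):
    """Debias the noisy reports: since E[count_v] = n_v * p + (n - n_v) * q,
    the true count is recoverable as (count_v - n * q) / (p - q)."""
    k, n = len(domain), len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = 1.0 / (math.exp(epsilon) + k - 1)
    counts = Counter(reports)
    return {v: (counts.get(v, 0) - n * q) / (p - q) for v in domain}

# Illustrative data: three classes with a skewed true distribution.
domain = ["A", "B", "C"]
data = ["A"] * 700 + ["B"] * 200 + ["C"] * 100
for eps in (0.1, 1.0, 5.0):
    noisy = [randomized_response(v, domain, eps) for v in data]
    est = estimate_frequencies(noisy, domain, eps)
    print(f"epsilon={eps}:", {v: round(c) for v, c in est.items()})
```

Running the example illustrates the trade-off the abstract reports: at small epsilon the reports are close to uniform noise and the recovered class frequencies swing widely, while at larger epsilon they track the true distribution closely but offer weaker privacy.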
Similar Papers
Privacy-Preserving Decentralized Federated Learning via Explainable Adaptive Differential Privacy
Cryptography and Security
Keeps private data safe while learning.
On Model Protection in Federated Learning against Eavesdropping Attacks
Cryptography and Security
Keeps shared model training safe from spying.
Differential Privacy Personalized Federated Learning Based on Dynamically Sparsified Client Updates
Machine Learning (CS)
Keeps your private data safe during AI training.