Empirical Comparison of Membership Inference Attacks in Deep Transfer Learning
By: Yuxuan Bai, Gauri Pradhan, Marlon Tobaben, and more
Potential Business Impact:
Finds ways hackers steal private info from AI.
With the emergence of powerful large-scale foundation models, the training paradigm is increasingly shifting from from-scratch training to transfer learning. This enables high-utility training with small, domain-specific datasets typical in sensitive applications. Membership inference attacks (MIAs) provide an empirical estimate of the privacy leakage of machine learning models. Yet, prior assessments of MIAs against models fine-tuned with transfer learning rely on a small subset of possible attacks. We address this by comparing the performance of diverse MIAs in transfer learning settings to help practitioners identify the most efficient attacks for privacy risk evaluation. We find that the efficacy of score-based MIAs decreases as the amount of training data grows, and that no single MIA captures all privacy risks in models trained with transfer learning. While the Likelihood Ratio Attack (LiRA) demonstrates superior performance across most experimental scenarios, the Inverse Hessian Attack (IHA) proves more effective against models fine-tuned on the PatchCamelyon dataset in the high-data regime.
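The abstract refers to score-based MIAs and to LiRA in particular. As a rough illustration only, the sketch below follows the standard LiRA recipe of fitting Gaussian distributions to logit-scaled shadow-model confidences and scoring a candidate example by a likelihood ratio. The function names, the use of numpy/scipy, and the helper structure are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm


def logit_scale(p, eps=1e-6):
    """Map a softmax confidence to logit space, where per-example
    confidences are roughly Gaussian (the working assumption in LiRA)."""
    p = np.clip(p, eps, 1 - eps)
    return np.log(p) - np.log(1 - p)


def lira_score(target_conf, in_confs, out_confs):
    """Likelihood-ratio membership score for one candidate example.

    target_conf : target model's confidence on the example
    in_confs    : confidences from shadow models trained WITH the example
    out_confs   : confidences from shadow models trained WITHOUT it

    Higher scores indicate the example was more likely a training member.
    """
    obs = logit_scale(np.asarray(target_conf))
    in_logits = logit_scale(np.asarray(in_confs))
    out_logits = logit_scale(np.asarray(out_confs))
    mu_in, sd_in = in_logits.mean(), in_logits.std() + 1e-8
    mu_out, sd_out = out_logits.mean(), out_logits.std() + 1e-8
    # Log-likelihood ratio between the "member" and "non-member" hypotheses.
    return norm.logpdf(obs, mu_in, sd_in) - norm.logpdf(obs, mu_out, sd_out)
```

In practice such a score would be computed for every candidate example and thresholded at a fixed false-positive rate to estimate privacy leakage; the paper compares this family of attacks against others such as IHA.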
Similar Papers
Membership Inference Attacks fueled by Few-Short Learning to detect privacy leakage tackling data integrity
Cryptography and Security
Finds if private data was used to train AI.
Membership Inference Attacks Beyond Overfitting
Cryptography and Security
Protects private data used to train smart programs.