Empirical Comparison of Membership Inference Attacks in Deep Transfer Learning
By: Yuxuan Bai, Gauri Pradhan, Marlon Tobaben and more
Potential Business Impact:
Finds best ways to check if AI learned private info.
With the emergence of powerful large-scale foundation models, the training paradigm is increasingly shifting from training from scratch to transfer learning. This enables high-utility training with the small, domain-specific datasets typical of sensitive applications. Membership inference attacks (MIAs) provide an empirical estimate of the privacy leakage of machine learning models. Yet, prior assessments of MIAs against models fine-tuned with transfer learning rely on a small subset of possible attacks. We address this by comparing the performance of diverse MIAs in transfer learning settings to help practitioners identify the most efficient attacks for privacy risk evaluation. We find that the efficacy of score-based MIAs decreases as the amount of training data increases. We also find that no single MIA captures all privacy risks in models trained with transfer learning. While the Likelihood Ratio Attack (LiRA) demonstrates superior performance across most experimental scenarios, the Inverse Hessian Attack (IHA) proves more effective against models fine-tuned on the PatchCamelyon dataset in the high-data regime.
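To make the score-based attack idea concrete, below is a minimal sketch of a LiRA-style membership test. It is not the paper's implementation: the function names are illustrative, the shadow-model confidences are synthetic placeholders drawn from Beta distributions, and a real evaluation would use per-example confidences from many shadow models fine-tuned with and without the target example.

```python
# Hedged sketch of a LiRA-style score-based membership inference test.
# Assumes numpy and scipy are available; all data here is synthetic.
import numpy as np
from scipy.stats import norm

def logit_scale(conf, eps=1e-6):
    """Map a softmax confidence to log-odds, where the Gaussian fit is made."""
    conf = np.clip(conf, eps, 1 - eps)
    return np.log(conf) - np.log(1 - conf)

def lira_score(target_conf, in_confs, out_confs):
    """Log-likelihood ratio of 'member' vs 'non-member' for one example.

    target_conf : attacked model's confidence on the example
    in_confs    : shadow-model confidences when the example WAS in training
    out_confs   : shadow-model confidences when the example was NOT in training
    """
    x = logit_scale(target_conf)
    mu_in, sd_in = norm.fit(logit_scale(np.asarray(in_confs)))
    mu_out, sd_out = norm.fit(logit_scale(np.asarray(out_confs)))
    # Higher score => the example is more likely to be a training member.
    return norm.logpdf(x, mu_in, sd_in) - norm.logpdf(x, mu_out, sd_out)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical shadow-model confidences for a single target example.
    in_confs = rng.beta(8, 2, size=64)    # shadow models trained on the example
    out_confs = rng.beta(4, 4, size=64)   # shadow models that never saw it
    print("LiRA score:", lira_score(0.93, in_confs, out_confs))
```

Thresholding this score over many examples yields the true-positive/false-positive trade-off that such comparisons typically report; the paper's observation about score-based attacks weakening with more training data corresponds to the "in" and "out" score distributions overlapping more.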
Similar Papers
Membership Inference Attacks fueled by Few-Short Learning to detect privacy leakage tackling data integrity
Cryptography and Security
Finds if private data was used to train AI.
Membership Inference Attacks as Privacy Tools: Reliability, Disparity and Ensemble
Machine Learning (CS)
Finds hidden privacy leaks in smart computer programs.