Imitative Membership Inference Attack
By: Yuntao Du, Yuetian Chen, Hanshen Xiao, and more
Potential Business Impact:
Finds if private data was used to train AI.
A Membership Inference Attack (MIA) assesses how much a target machine learning model reveals about its training data by determining whether specific query instances were part of the training set. State-of-the-art MIAs rely on training hundreds of shadow models that are independent of the target model, leading to significant computational overhead. In this paper, we introduce Imitative Membership Inference Attack (IMIA), which employs a novel imitative training technique to strategically construct a small number of target-informed imitative models that closely replicate the target model's behavior for inference. Extensive experimental results demonstrate that IMIA substantially outperforms existing MIAs in various attack settings while requiring less than 5% of the computational cost of state-of-the-art approaches.
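The abstract does not spell out the imitative-training procedure, but the overall attack shape it describes can be illustrated. The following is a minimal sketch, assuming a distillation-style construction: a handful of "imitative" models are fit on attacker-held auxiliary data labeled by the target's own predictions, and membership is scored by comparing the target's loss on a query point against the imitative models' losses. The model choice, the hard-label distillation, and the loss-gap score are illustrative assumptions, not the paper's actual method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy setup: the target model is trained on a private split; the attacker
# only holds a disjoint auxiliary split and black-box access to the target.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train = X[:1000], y[:1000]   # target's private training data
X_aux, y_aux = X[1000:], y[1000:]       # attacker's auxiliary data

target = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def cross_entropy(model, x, label):
    """Per-example cross-entropy loss of `model` on one (x, label) pair."""
    p = model.predict_proba(x.reshape(1, -1))[0, label]
    return -np.log(np.clip(p, 1e-12, 1.0))

def imitative_model(seed):
    """Hypothetical 'target-informed' model: distill the target's
    predictions onto a random subset of the auxiliary data so the
    imitator mimics the target's behavior without its training set."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_aux), size=800, replace=False)
    pseudo_labels = target.predict(X_aux[idx])
    return LogisticRegression(max_iter=1000).fit(X_aux[idx], pseudo_labels)

# Only a handful of models, in contrast to hundreds of shadow models.
imitators = [imitative_model(s) for s in range(4)]

def membership_score(x, label):
    """Higher score => more likely a training member: members tend to have
    lower loss under the target than under models that imitate it but
    never saw x."""
    target_loss = cross_entropy(target, x, label)
    imit_losses = np.array([cross_entropy(m, x, label) for m in imitators])
    return imit_losses.mean() - target_loss

member_scores = [membership_score(X_train[i], y_train[i]) for i in range(100)]
nonmember_scores = [membership_score(X_aux[i], y_aux[i]) for i in range(100)]
print(f"mean member score:     {np.mean(member_scores):+.3f}")
print(f"mean non-member score: {np.mean(nonmember_scores):+.3f}")
```

Because only a few imitative models are trained (four here, versus hundreds of independent shadow models in prior work), the cost of the inference step stays small; this is the efficiency gap the abstract's sub-5% figure refers to, though the sketch above makes no claim about reproducing that number.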
Similar Papers
Membership Inference Attacks Beyond Overfitting
Cryptography and Security
Protects private data used to train smart programs.
ImpMIA: Leveraging Implicit Bias for Membership Inference Attack under Realistic Scenarios
Machine Learning (CS)
Finds private data used to train AI.
Privacy Leaks by Adversaries: Adversarial Iterations for Membership Inference Attack
Cryptography and Security
Finds if your private data trained AI.