Cascading and Proxy Membership Inference Attacks
By: Yuntao Du, Jiacheng Li, Yuetian Chen, and more
Potential Business Impact:
Protects private data from being guessed by AI.
A Membership Inference Attack (MIA) assesses how much a trained machine learning model reveals about its training data by determining whether specific query instances were included in the training dataset. We classify existing MIAs as adaptive or non-adaptive, depending on whether the adversary is allowed to train shadow models after observing the membership queries. In the adaptive setting, where the adversary can train shadow models after accessing the query instances, we highlight the importance of exploiting membership dependencies between instances and propose an attack-agnostic framework called Cascading Membership Inference Attack (CMIA), which incorporates membership dependencies via conditional shadow training to boost membership inference performance. In the non-adaptive setting, where the adversary must train shadow models before obtaining the membership queries, we introduce the Proxy Membership Inference Attack (PMIA). PMIA employs a proxy selection strategy that identifies samples whose behavior resembles that of the query instance and uses their behavior in shadow models to perform a membership posterior odds test. We provide theoretical analyses for both attacks, and extensive experimental results demonstrate that CMIA and PMIA substantially outperform existing MIAs in both settings, particularly in the low false-positive regime, which is crucial for evaluating privacy risks.
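To make the posterior odds test concrete, here is a minimal sketch of the general idea, under assumptions not taken from the paper: proxy samples' losses in shadow models that did (IN) or did not (OUT) contain them are modeled as Gaussians, and the query's target-model loss is scored by the likelihood ratio of the two fits. The function name and Gaussian parametrization are illustrative, not the authors' actual procedure.

```python
import math
from statistics import mean, pstdev

def _gauss_pdf(x, mu, sd):
    # Gaussian density; small floor on sd avoids division by zero.
    sd = max(sd, 1e-8)
    return math.exp(-((x - mu) ** 2) / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

def posterior_odds_test(target_loss, proxy_in_losses, proxy_out_losses):
    """Hypothetical membership posterior odds test.

    proxy_in_losses:  losses of proxy samples in shadow models trained WITH them.
    proxy_out_losses: losses of proxy samples in shadow models trained WITHOUT them.
    Returns the likelihood ratio P(loss | member) / P(loss | non-member);
    values above 1 suggest the query instance was a training member.
    """
    p_in = _gauss_pdf(target_loss, mean(proxy_in_losses), pstdev(proxy_in_losses))
    p_out = _gauss_pdf(target_loss, mean(proxy_out_losses), pstdev(proxy_out_losses))
    return p_in / (p_out + 1e-30)
```

For example, a query with a low target-model loss (typical of memorized training points) yields a ratio well above 1 when the IN-proxy losses are low and the OUT-proxy losses are high.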
Similar Papers
Imitative Membership Inference Attack
Cryptography and Security
Finds if private data was used to train AI.
Membership Inference Attacks Beyond Overfitting
Cryptography and Security
Protects private data used to train smart programs.