On the Effectiveness of Membership Inference in Targeted Data Extraction from Large Language Models
By: Ali Al Sahili, Ali Chehab, Razane Tajeddine
Potential Business Impact:
Helps measure how easily AI models leak private training data, so builders can defend against such leaks.
Large Language Models (LLMs) are prone to memorizing training data, which poses serious privacy risks. Two of the most prominent concerns are training data extraction and Membership Inference Attacks (MIAs). Prior research has shown that these threats are interconnected: adversaries can extract training data from an LLM by querying the model to generate a large volume of text and subsequently applying MIAs to verify whether a particular data point was included in the training set. In this study, we integrate multiple MIA techniques into the data extraction pipeline to systematically benchmark their effectiveness. We then compare their performance in this integrated setting against results from conventional MIA benchmarks, allowing us to evaluate their practical utility in real-world extraction scenarios.
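The extraction-then-verification pipeline the abstract describes can be illustrated with a short sketch. The snippet below is an assumption-laden illustration, not the authors' implementation: it uses GPT-2 from Hugging Face transformers as a stand-in target model, top-k sampling from a short prompt as the generation step, and a single loss-over-zlib membership score where the paper benchmarks several MIA techniques.

```python
# Minimal sketch of "generate candidates, then apply an MIA to flag likely
# training members". Model choice, prompt, and scoring rule are illustrative
# assumptions, not the paper's exact setup.
import zlib

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()


def generate_candidates(prompt: str, n: int = 8, max_new_tokens: int = 64) -> list[str]:
    """Step 1: query the model to produce a pool of candidate extractions."""
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            do_sample=True,
            top_k=40,
            max_new_tokens=max_new_tokens,
            num_return_sequences=n,
            pad_token_id=tokenizer.eos_token_id,
        )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]


def membership_score(text: str) -> float:
    """Step 2: one possible MIA score, model loss calibrated by zlib entropy.

    Lower values mean the text is "easier" for the model than its surface
    entropy suggests, a common heuristic for memorized (member) samples.
    """
    ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
    with torch.no_grad():
        loss = model(ids, labels=ids).loss.item()
    zlib_entropy = len(zlib.compress(text.encode("utf-8")))
    return loss / zlib_entropy


if __name__ == "__main__":
    candidates = generate_candidates("My email address is")
    # Step 3: rank candidates; the lowest-scoring ones are flagged as likely members.
    for score, text in sorted((membership_score(t), t) for t in candidates)[:3]:
        print(f"{score:.4f}  {text!r}")
```

In a benchmark like the one the abstract describes, the scoring function would be swapped out for each MIA under evaluation, and the flagged candidates would be compared against known training data to measure how well each attack verifies true extractions.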
Similar Papers
Membership Inference Attacks on Large-Scale Models: A Survey
Machine Learning (CS)
Finds if your private info trained AI.
Lost in Modality: Evaluating the Effectiveness of Text-Based Membership Inference Attacks on Large Multimodal Models
Cryptography and Security
Finds if private images were used to train AI.
Membership Inference Attacks on LLM-based Recommender Systems
Information Retrieval
Finds if your private data is used in recommendations.