Disparate Privacy Vulnerability: Targeted Attribute Inference Attacks and Defenses
By: Ehsanul Kabir, Lucas Craig, Shagufta Mehnaz
Potential Business Impact:
Protects private data from sneaky computer guesses.
As machine learning (ML) technologies become more prevalent in privacy-sensitive areas like healthcare and finance, often incorporating sensitive information into data-driven algorithms, it is vital to scrutinize whether these data face privacy leakage risks. One potential threat arises when an adversary queries a trained model using the public, non-sensitive attributes of entities in the training data to infer their private, sensitive attributes, a technique known as the attribute inference attack. This attack is particularly deceptive because, while it may perform poorly at predicting sensitive attributes across the entire dataset, it excels at predicting the sensitive attributes of records from a few vulnerable groups, a phenomenon known as disparate vulnerability. This paper shows that an adversary can exploit this disparity to carry out a series of new attacks, demonstrating a threat level beyond what was previously recognized. We first develop a novel inference attack, the disparity inference attack, which identifies high-risk groups within the dataset. We then introduce two targeted variants of the attribute inference attack that identify and exploit a vulnerable subset of the training data; these are the first targeted attacks in this category, and they achieve significantly higher accuracy than their untargeted counterparts. We are also the first to introduce an effective disparity mitigation technique that simultaneously preserves model performance and prevents targeted attacks.
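To make the threat model concrete, here is a minimal sketch, not the paper's algorithm, of a confidence-based attribute inference attack (in the style of model inversion attacks) together with a per-group accuracy measurement that illustrates disparate vulnerability. The synthetic data, the binary sensitive attribute in column 0, and the grouping attribute in column 3 are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical setup: a model trained on records that include a sensitive
# binary attribute (column 0); the remaining columns are public attributes.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
# The sensitive attribute correlates with a public one, so the model leaks it.
X[:, 0] = (X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(float)
y = (X[:, 1] + X[:, 2] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

def infer_sensitive(model, record, y_true, candidates=(0.0, 1.0)):
    """Confidence-based attribute inference: try each candidate value of the
    sensitive attribute and keep the one under which the model assigns the
    highest confidence to the record's known label."""
    best, best_conf = None, -1.0
    for v in candidates:
        probe = record.copy()
        probe[0] = v  # overwrite the (unknown-to-the-adversary) sensitive value
        conf = model.predict_proba(probe.reshape(1, -1))[0][y_true]
        if conf > best_conf:
            best, best_conf = v, conf
    return best

# Disparate vulnerability: attack accuracy measured per group rather than
# over the whole dataset. Groups here are an arbitrary split on a public
# attribute, standing in for demographic or other subpopulations.
groups = (X[:, 3] > 0).astype(int)
for g in (0, 1):
    idx = np.where(groups == g)[0]
    guesses = [infer_sensitive(model, X[i].copy(), y[i]) for i in idx]
    acc = np.mean([guess == X[i, 0] for guess, i in zip(guesses, idx)])
    print(f"group {g}: attack accuracy = {acc:.2f}")
```

An aggregate accuracy over all records can look unalarming while one group's per-group accuracy is far higher; the paper's disparity inference and targeted attacks build on exactly this gap, letting the adversary first locate the high-risk group and then attack only its records.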
Similar Papers
Towards Better Attribute Inference Vulnerability Measures
Cryptography and Security
Protects private info while keeping data useful.
How Worrying Are Privacy Attacks Against Machine Learning?
Cryptography and Security
Protects personal information used to train AI.
Membership Inference Attacks as Privacy Tools: Reliability, Disparity and Ensemble
Machine Learning (CS)
Finds hidden privacy leaks in smart computer programs.