Disparate Privacy Vulnerability: Targeted Attribute Inference Attacks and Defenses

Published: April 5, 2025 | arXiv ID: 2504.04033v1

By: Ehsanul Kabir, Lucas Craig, Shagufta Mehnaz

Potential Business Impact:

Protects individuals' sensitive data from being inferred through attacks on trained machine learning models.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

As machine learning (ML) technologies become more prevalent in privacy-sensitive areas like healthcare and finance, increasingly incorporating sensitive information into data-driven algorithms, it is vital to scrutinize whether these data face any privacy leakage risks. One potential threat arises when an adversary queries a trained model using the public, non-sensitive attributes of entities in the training data to infer their private, sensitive attributes, a technique known as an attribute inference attack. This attack is particularly insidious because, while it may perform poorly at predicting sensitive attributes across the entire dataset, it excels at predicting the sensitive attributes of records from a few vulnerable groups, a phenomenon known as disparate vulnerability. This paper demonstrates that an adversary can exploit this disparity to carry out a series of new attacks, revealing a threat level well beyond what was previously understood. We first develop a novel inference attack called the disparity inference attack, which identifies the high-risk groups within a dataset. We then introduce two targeted variations of the attribute inference attack that can identify and exploit a vulnerable subset of the training data; these are the first targeted attacks in this category, and they achieve significantly higher accuracy than their untargeted counterparts. We are also the first to introduce a novel and effective disparity mitigation technique that simultaneously preserves model performance and prevents any risk of targeted attacks.
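To make the threat model concrete, below is a minimal sketch of a classic confidence-based attribute inference attack, followed by a per-group breakdown of its accuracy, which is how disparate vulnerability surfaces in practice. The synthetic dataset, the logistic-regression target model, and the group column are illustrative assumptions; this is not the paper's disparity inference attack or its mitigation technique.

    # Sketch: confidence-based attribute inference with a per-group
    # vulnerability breakdown. All data and models are synthetic stand-ins.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic records: 3 public features, 1 binary sensitive attribute,
    # and a group label (e.g., a demographic subgroup) used only for analysis.
    n = 2000
    X_pub = rng.normal(size=(n, 3))
    group = rng.integers(0, 4, size=n)
    sensitive = rng.integers(0, 2, size=n)

    # Let the label depend on the sensitive attribute much more strongly in
    # group 0, making that group disparately vulnerable to inference.
    strength = np.where(group == 0, 3.0, 0.5)
    logits = X_pub @ np.array([1.0, -1.0, 0.5]) + strength * (2 * sensitive - 1)
    y = (logits + rng.normal(size=n) > 0).astype(int)

    # Target model is trained on public features plus the sensitive attribute.
    target_model = LogisticRegression().fit(
        np.column_stack([X_pub, sensitive]), y)

    def infer_sensitive(model, x_pub, true_label, candidates=(0, 1)):
        """Try each candidate sensitive value and keep the one for which the
        model assigns the highest probability to the record's known label
        (black-box access to predicted probabilities is assumed)."""
        best, best_conf = None, -1.0
        for v in candidates:
            trial = np.append(x_pub, v).reshape(1, -1)
            conf = model.predict_proba(trial)[0, true_label]
            if conf > best_conf:
                best, best_conf = v, conf
        return best

    guesses = np.array([infer_sensitive(target_model, x, t)
                        for x, t in zip(X_pub, y)])

    # Disparate vulnerability: attack accuracy broken down by group.
    for g in range(4):
        mask = group == g
        acc = (guesses[mask] == sensitive[mask]).mean()
        print(f"group {g}: attack accuracy {acc:.2f}")

Run as written, the attack hovers near chance on most groups but is far more accurate on the group whose labels encode the sensitive attribute most strongly; that per-group gap is exactly the signal a disparity-aware, targeted adversary would look for.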

Page Count
19 pages

Category
Computer Science:
Machine Learning (CS)