SMOTE and Mirrors: Exposing Privacy Leakage from Synthetic Minority Oversampling
By: Georgi Ganev, Reza Nazari, Rees Davison, and more
Potential Business Impact:
Shows that SMOTE's fake data can leak real private records.
The Synthetic Minority Over-sampling Technique (SMOTE) is one of the most widely used methods for addressing class imbalance and generating synthetic data. Despite its popularity, little attention has been paid to its privacy implications, even though it is used in the wild in many privacy-sensitive applications. In this work, we conduct the first systematic study of privacy leakage in SMOTE. We begin by showing that prevailing evaluation practices, i.e., naive distinguishing and distance-to-closest-record metrics, completely fail to detect any leakage, while membership inference attacks (MIAs) can nonetheless be instantiated with high accuracy. Then, by exploiting SMOTE's geometric properties, we build two novel attacks under very limited assumptions: DistinSMOTE, which perfectly distinguishes real from synthetic records in augmented datasets, and ReconSMOTE, which reconstructs real minority records from synthetic datasets with perfect precision and with recall approaching one under realistic imbalance ratios. We also provide theoretical guarantees for both attacks. Experiments on eight standard imbalanced datasets confirm the practicality and effectiveness of these attacks. Overall, our work reveals that SMOTE is inherently non-private and disproportionately exposes minority records, highlighting the need to reconsider its use in privacy-sensitive applications.
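The geometric property the attacks exploit is that SMOTE places every synthetic record on the line segment between a real minority record and one of its k nearest minority neighbors, so synthetic points cluster on a small number of shared lines. The sketch below illustrates that fingerprint on toy data; it is not the authors' attack code, and `smote_like`, `collinear_fraction`, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy minority class: a handful of "real" records in two dimensions.
real = rng.normal(size=(6, 2))


def smote_like(X, n_new, k=3):
    """Generate synthetic points the way SMOTE does: each new record is a
    convex combination x_new = x_i + lam * (x_nn - x_i), lam ~ U(0, 1),
    of a real minority record x_i and one of its k nearest minority
    neighbors x_nn."""
    synth = np.empty((n_new, X.shape[1]))
    for t in range(n_new):
        i = rng.integers(len(X))
        dists = np.linalg.norm(X - X[i], axis=1)
        neighbors = np.argsort(dists)[1 : k + 1]  # skip x_i itself
        j = rng.choice(neighbors)
        lam = rng.random()
        synth[t] = X[i] + lam * (X[j] - X[i])
    return synth


def collinear_fraction(P, tol=1e-9):
    """Fraction of points in P that are (numerically) collinear with at
    least one pair of other points in P. SMOTE output concentrates on a
    few segments, so this fraction is far higher than for generic
    continuous data, where three points are almost never collinear."""
    n = len(P)
    hits = 0
    for a in range(n):
        found = False
        for b in range(n):
            if b == a or found:
                continue
            for c in range(b + 1, n):
                if c == a:
                    continue
                # Twice the area of triangle (P[a], P[b], P[c]);
                # (numerically) zero means the points are collinear.
                area = abs(
                    (P[b, 0] - P[a, 0]) * (P[c, 1] - P[a, 1])
                    - (P[b, 1] - P[a, 1]) * (P[c, 0] - P[a, 0])
                )
                if area < tol:
                    found = True
                    break
        hits += found
    return hits / n


synthetic = smote_like(real, n_new=40)
noise = rng.normal(size=(40, 2))
print("collinear fraction, SMOTE output :", collinear_fraction(synthetic))
print("collinear fraction, Gaussian data:", collinear_fraction(noise))
```

Because each segment ends at a real minority record, a distinguisher can flag records that anchor many collinear synthetic points, and a reconstruction attack can recover real records as segment endpoints once enough synthetic points share a segment; this is the intuition behind DistinSMOTE and ReconSMOTE, though the paper's actual attacks and guarantees are more involved than this toy check.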
Similar Papers
SMOTE-DP: Improving Privacy-Utility Tradeoff with Synthetic Data
Machine Learning (CS)
Makes private data useful without losing secrets.
Concentration and excess risk bounds for imbalanced classification with synthetic oversampling
Machine Learning (Stat)
Helps computers learn better from unfair data.
Simplicial SMOTE: Oversampling Solution to the Imbalanced Learning Problem
Machine Learning (CS)
Makes computer learning fairer with more examples.