Theoretical Convergence of SMOTE-Generated Samples
By: Firuz Kamalov, Hana Sulieman, Witold Pedrycz
Potential Business Impact:
Makes AI learn better from imbalanced data.
Imbalanced data affects a wide range of machine learning applications, from healthcare to network security. As SMOTE is one of the most popular approaches to addressing this issue, it is imperative to validate it not only empirically but also theoretically. In this paper, we provide a rigorous theoretical analysis of SMOTE's convergence properties. Concretely, we prove that the synthetic random variable Z converges in probability to the underlying random variable X. We further prove a stronger convergence in mean when X has compact support. Finally, we show that lower values of the nearest neighbor rank lead to faster convergence, offering actionable guidance to practitioners. The theoretical results are supported by numerical experiments using both real-life and synthetic data. Our work provides a foundational understanding that enhances data augmentation techniques beyond imbalanced data scenarios.
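To make the convergence claim concrete, below is a minimal Python sketch of the standard SMOTE interpolation, Z = X + W(X_k - X), where X_k is the k-th nearest neighbor of X and W is uniform on (0, 1). The small experiment checks, under illustrative assumptions (Gaussian minority class, the sample sizes and neighbor ranks shown), that the mean gap |Z - X| shrinks as the sample size grows and is smaller for lower neighbor rank k; it is not the authors' experimental setup.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_samples(X, k=5, rng=None):
    """Generate one synthetic point per row of X by interpolating toward
    its k-th nearest neighbor (standard SMOTE-style interpolation)."""
    rng = np.random.default_rng(rng)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)            # idx[:, 0] is the point itself
    neighbors = X[idx[:, k]]             # the k-th nearest neighbor
    w = rng.uniform(size=(len(X), 1))    # interpolation weights W ~ U(0, 1)
    return X + w * (neighbors - X)

# Empirical proxy for E|Z - X|: the gap shrinks as n grows (convergence)
# and is smaller for lower neighbor rank k (faster convergence).
rng = np.random.default_rng(0)
for n in (100, 1000, 10000):
    X = rng.normal(size=(n, 2))          # illustrative minority-class sample
    for k in (1, 5):
        Z = smote_samples(X, k=k, rng=1)
        gap = np.linalg.norm(Z - X, axis=1).mean()
        print(f"n={n:>6}  k={k}  mean |Z - X| = {gap:.4f}")
```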
Similar Papers
Concentration and excess risk bounds for imbalanced classification with synthetic oversampling
Machine Learning (Stat)
Helps computers learn better from imbalanced data.
SMOTE and Mirrors: Exposing Privacy Leakage from Synthetic Minority Oversampling
Cryptography and Security
Shows how synthetic data can leak private information.
Simplicial SMOTE: Oversampling Solution to the Imbalanced Learning Problem
Machine Learning (CS)
Improves learning from imbalanced data by generating more minority-class examples.