Theoretical Convergence of SMOTE-Generated Samples

Published: January 5, 2026 | arXiv ID: 2601.01927v1

By: Firuz Kamalov, Hana Sulieman, Witold Pedrycz

Potential Business Impact:

Helps machine learning models learn more reliably from imbalanced data.

Business Areas:
A/B Testing, Data and Analytics

Imbalanced data affects a wide range of machine learning applications, from healthcare to network security. As SMOTE is one of the most popular approaches to addressing this issue, it is imperative to validate it not only empirically but also theoretically. In this paper, we provide a rigorous theoretical analysis of SMOTE's convergence properties. Concretely, we prove that the synthetic random variable Z converges in probability to the underlying random variable X. We further prove a stronger convergence in mean when the support of X is compact. Finally, we show that lower values of the nearest-neighbor rank lead to faster convergence, offering actionable guidance to practitioners. The theoretical results are supported by numerical experiments on both real-life and synthetic data. Our work provides a foundational understanding that enhances data augmentation techniques beyond imbalanced data scenarios.
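For readers unfamiliar with the mechanism under analysis: SMOTE creates each synthetic point by linear interpolation between a minority-class sample x and a randomly chosen one of its k nearest minority neighbors, i.e. Z = X + U(X_nn - X) with U uniform on [0, 1]. The sketch below is illustrative, not the authors' code; the function name and parameters are assumptions. It shows the generation step whose output Z the paper proves converges to X, with lower neighbor ranks (smaller k) converging faster.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_sample(X_min, k=5, n_samples=100, rng=None):
    """Generate synthetic minority samples via SMOTE-style interpolation.

    Each synthetic point is z = x + u * (x_nn - x), where x is a random
    minority sample, x_nn is one of its k nearest minority neighbors,
    and u ~ Uniform(0, 1). (Hypothetical helper for illustration.)
    """
    rng = np.random.default_rng(rng)
    # Ask for k+1 neighbors because each point is its own nearest neighbor.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)

    synthetic = np.empty((n_samples, X_min.shape[1]))
    for i in range(n_samples):
        j = rng.integers(len(X_min))          # random minority point x
        nbr = idx[j, rng.integers(1, k + 1)]  # random neighbor among its k NNs
        u = rng.random()                      # interpolation weight in [0, 1]
        synthetic[i] = X_min[j] + u * (X_min[nbr] - X_min[j])
    return synthetic

# Example: oversample a small 2-D minority class.
X_min = np.random.default_rng(0).normal(size=(50, 2))
Z = smote_sample(X_min, k=3, n_samples=10, rng=0)
```

With k = 1, every synthetic point lies on the segment to its single nearest neighbor, which corresponds to the fast-converging low-rank regime the paper identifies.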

Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)