A Theoretical and Empirical Taxonomy of Imbalance in Binary Classification
By: Rose Yvette Bandolo Essomba, Ernest Fokoué
Potential Business Impact:
Predicts when imbalanced data will degrade a classifier's predictions.
Class imbalance significantly degrades classification performance, yet its effects are rarely analyzed from a unified theoretical perspective. We propose a principled framework based on three fundamental scales: the imbalance coefficient $\eta$, the sample-to-dimension ratio $\kappa$, and the intrinsic separability $\Delta$. Starting from the Gaussian Bayes classifier, we derive closed-form Bayes errors and show how imbalance shifts the discriminant boundary, yielding a deterioration slope that predicts four regimes: Normal, Mild, Extreme, and Catastrophic. Using a balanced high-dimensional genomic dataset, we vary only $\eta$ while keeping $\kappa$ and $\Delta$ fixed. Across parametric and non-parametric models, empirical degradation closely follows theoretical predictions: minority Recall collapses once $\log(\eta)$ exceeds $\Delta\sqrt{\kappa}$, Precision increases asymmetrically, and F1-score and PR-AUC decline in line with the predicted regimes. These results show that the triplet $(\eta,\kappa,\Delta)$ provides a model-agnostic, geometrically grounded explanation of imbalance-induced deterioration.
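To make the boundary-shift mechanism concrete, the sketch below works through the standard equal-covariance Gaussian case. For two spherical Gaussian classes whose means are a Mahalanobis distance $\Delta$ apart, with prior odds $\eta = \pi_{\text{maj}}/\pi_{\text{min}}$, the log-prior term moves the Bayes discriminant toward the minority class by $\log(\eta)/\Delta$ along the discriminant direction, giving a minority recall of $\Phi(\Delta/2 - \log(\eta)/\Delta)$. This is a textbook LDA identity consistent with the abstract's description, not necessarily the paper's exact derivation; the regime cut-points in the code are illustrative assumptions keyed to the abstract's $\log(\eta) > \Delta\sqrt{\kappa}$ collapse condition, and the function names are ours.

```python
import numpy as np
from scipy.stats import norm


def minority_recall_bayes(eta: float, delta: float) -> float:
    """Theoretical minority-class recall of the Gaussian Bayes (LDA) rule.

    Assumes two spherical Gaussian classes whose means lie a Mahalanobis
    distance `delta` apart, with prior odds eta = pi_majority / pi_minority.
    The log-prior term shifts the discriminant boundary toward the minority
    class by log(eta)/delta, so recall = Phi(delta/2 - log(eta)/delta).
    """
    return norm.cdf(delta / 2.0 - np.log(eta) / delta)


def regime(eta: float, kappa: float, delta: float) -> str:
    """Label the imbalance regime by comparing log(eta) to delta*sqrt(kappa).

    The abstract states that minority recall collapses once log(eta) exceeds
    delta*sqrt(kappa); the intermediate cut-points below are illustrative
    assumptions, not the paper's exact regime definitions.
    """
    ratio = np.log(eta) / (delta * np.sqrt(kappa))
    if ratio < 0.5:
        return "Normal"
    elif ratio < 1.0:
        return "Mild"
    elif ratio < 2.0:
        return "Extreme"
    return "Catastrophic"


if __name__ == "__main__":
    delta, kappa = 2.0, 1.0  # illustrative separability and sample-dimension ratio
    for eta in (1, 5, 20, 100, 1000):
        print(f"eta={eta:5d}  recall~{minority_recall_bayes(eta, delta):.3f}  "
              f"regime={regime(eta, kappa, delta)}")
```

Running the sketch shows minority recall decaying from about 0.84 at $\eta=1$ to under 0.01 at $\eta=1000$ for $\Delta=2$, with the regime label crossing from Normal to Catastrophic as $\log(\eta)$ passes the $\Delta\sqrt{\kappa}$ threshold.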
Similar Papers
Balancing the Scales: A Theoretical and Algorithmic Framework for Learning from Imbalanced Data
Machine Learning (CS)
Teaches computers to learn from unfair data.
A statistical theory of overfitting for imbalanced classification
Statistics Theory
Fixes computer learning mistakes with rare data.