Weighted MCC: A Robust Measure of Multiclass Classifier Performance for Observations with Individual Weights
By: Rommel Cortez, Bala Krishnamoorthy
Several performance measures are used to evaluate binary and multiclass classification tasks. However, individual observations often carry distinct weights, and none of these measures is sensitive to such varying weights. We propose a new weighted Pearson-Matthews Correlation Coefficient (MCC) for binary classification, as well as weighted versions of related multiclass measures. The weighted MCC varies between $-1$ and $1$. Crucially, the weighted MCC takes higher values for classifiers that perform better on the highly weighted observations, and is hence able to distinguish them from classifiers that have similar overall performance but perform better on the lower-weighted observations. Furthermore, we prove that the weighted measures are robust with respect to the choice of weights in a precise manner: if the weights are changed by at most $\epsilon$, the value of the weighted measure changes by a factor of at most $\epsilon$ in the binary case and $\epsilon^2$ in the multiclass case. Our computations demonstrate that the weighted measures clearly identify classifiers that perform better on higher-weighted observations, while the unweighted measures remain completely indifferent to the choice of weights.
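To make the idea concrete, the following is a minimal Python sketch of one natural way to weight the binary MCC: each confusion-matrix count (TP, TN, FP, FN) is replaced by the total weight of the observations falling in that cell, and the standard MCC formula is then applied to these weighted counts. This is an illustrative assumption about the construction, not the paper's exact definition; the function weighted_mcc and the toy data below are hypothetical.

    import numpy as np

    def weighted_mcc(y_true, y_pred, weights):
        """Weighted binary MCC (illustrative sketch): accumulate observation
        weights, rather than plain counts, into the confusion-matrix cells,
        then apply the standard MCC formula to those weighted counts."""
        y_true = np.asarray(y_true, dtype=bool)
        y_pred = np.asarray(y_pred, dtype=bool)
        w = np.asarray(weights, dtype=float)

        tp = w[y_true & y_pred].sum()     # total weight of true positives
        tn = w[~y_true & ~y_pred].sum()   # total weight of true negatives
        fp = w[~y_true & y_pred].sum()    # total weight of false positives
        fn = w[y_true & ~y_pred].sum()    # total weight of false negatives

        denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        if denom == 0:
            return 0.0  # common convention when a marginal sum is zero
        return (tp * tn - fp * fn) / denom

    # Two classifiers with identical unweighted confusion matrices:
    y_true = [1, 1, 0, 0, 1, 0]
    clf_a  = [1, 0, 0, 0, 1, 1]   # errs on observations 1 and 5 (low weight)
    clf_b  = [0, 1, 0, 1, 1, 0]   # errs on observations 0 and 3 (incl. high weight)
    w      = [5.0, 0.5, 1.0, 0.5, 5.0, 1.0]
    print(weighted_mcc(y_true, clf_a, w), weighted_mcc(y_true, clf_b, w))

In this toy example the two classifiers make the same number of mistakes and have the same unweighted confusion matrix, yet the classifier whose mistakes fall on the low-weight observations receives the higher weighted MCC, which is the kind of distinction described in the abstract.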