Robustness and uncertainty: two complementary aspects of the reliability of the predictions of a classifier
By: Adrián Detavernier, Jasper De Bock
We consider two conceptually different approaches for assessing the reliability of the individual predictions of a classifier: Robustness Quantification (RQ) and Uncertainty Quantification (UQ). We compare both approaches on a number of benchmark datasets and show that there is no clear winner between the two, but that they are complementary and can be combined to obtain a hybrid approach that outperforms both RQ and UQ. As a byproduct of our approach, for each dataset, we also obtain an assessment of the relative importance of uncertainty and robustness as sources of unreliability.
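The abstract does not specify how the robustness and uncertainty assessments are defined or combined, so the sketch below is only a rough illustration of the general idea: it uses predictive entropy as a stand-in UQ score, prediction stability under small random input perturbations as a stand-in RQ score, and a simple convex combination as the hybrid reliability score. Every name here (uncertainty_score, robustness_score, hybrid_reliability, the perturbation scale eps, and the weight alpha) is a hypothetical placeholder, not the paper's actual method.

```python
import numpy as np

def uncertainty_score(probs):
    """UQ proxy: Shannon entropy of the predicted class probabilities.
    (Illustrative stand-in; the paper's UQ measure may differ.)"""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def robustness_score(model_predict, x, eps=0.05, n_samples=50, seed=None):
    """RQ proxy: fraction of small Gaussian perturbations of x that leave
    the predicted class unchanged. (Illustrative stand-in.)"""
    rng = np.random.default_rng(seed)
    base = np.argmax(model_predict(x))
    noisy = x + rng.normal(0.0, eps, size=(n_samples,) + np.shape(x))
    unchanged = sum(np.argmax(model_predict(z)) == base for z in noisy)
    return unchanged / n_samples

def hybrid_reliability(probs, model_predict, x, alpha=0.5):
    """Hybrid score: convex combination of the robustness proxy and the
    inverted, normalised uncertainty proxy; alpha weights the two
    sources of unreliability. (Hypothetical combination rule.)"""
    u = uncertainty_score(probs) / np.log(len(probs))  # normalise to [0, 1]
    r = robustness_score(model_predict, x)
    return alpha * r + (1 - alpha) * (1 - u)
```

With a scikit-learn-style classifier, model_predict could for instance be lambda z: clf.predict_proba(z.reshape(1, -1))[0]; fitting alpha per dataset would then give the kind of dataset-specific weighting of uncertainty versus robustness the abstract alludes to.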
Similar Papers
Uncertainty Quantification in Probabilistic Machine Learning Models: Theory, Methods, and Insights
Machine Learning (Stat)
Helps computers know when they're unsure.
Uncertainty Quantification for Machine Learning: One Size Does Not Fit All
Machine Learning (CS)
Chooses best way to measure computer guesses.