Uncertainty Quantification for Machine Learning: One Size Does Not Fit All
By: Paul Hofman, Yusuf Sale, Eyke Hüllermeier
Proper quantification of predictive uncertainty is essential for the use of machine learning in safety-critical applications. Various uncertainty measures have been proposed for this purpose, each typically claimed to be superior to the others. In this paper, we argue that there is no single best measure. Instead, uncertainty quantification should be tailored to the specific application. To this end, we use a flexible family of uncertainty measures that distinguishes between the total, aleatoric, and epistemic uncertainty of second-order distributions. These measures can be instantiated with specific loss functions, so-called proper scoring rules, to control their characteristics, and we show that different characteristics are useful for different tasks. In particular, we show that for the task of selective prediction, the scoring rule should ideally match the task loss. For out-of-distribution detection, on the other hand, our results confirm that mutual information, a widely used measure of epistemic uncertainty, performs best. Furthermore, in an active learning setting, epistemic uncertainty based on the zero-one loss consistently outperforms the other uncertainty measures.
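To make the loss-based decomposition concrete, here is a minimal sketch that approximates a second-order distribution by an ensemble of predicted class distributions and computes total, aleatoric, and epistemic uncertainty for a chosen proper scoring rule. The function names and the ensemble-based approximation are illustrative assumptions, not the paper's implementation; with the log score the epistemic term reduces to mutual information, the measure the abstract highlights for out-of-distribution detection.

    import numpy as np

    def generalized_entropy(p, score="log"):
        # Expected loss of predicting p when the outcome is drawn from p;
        # each proper scoring rule induces its own generalized entropy.
        if score == "log":       # log loss -> Shannon entropy
            return -np.sum(p * np.log(np.clip(p, 1e-12, None)))
        if score == "zero_one":  # zero-one loss -> 1 - max probability
            return 1.0 - np.max(p)
        if score == "brier":     # Brier score -> Gini impurity
            return 1.0 - np.sum(p ** 2)
        raise ValueError(f"unknown score: {score}")

    def uncertainty_decomposition(ensemble, score="log"):
        # ensemble: array of shape (M, K), i.e. M predicted distributions
        # over K classes, approximating the second-order distribution.
        p_bar = ensemble.mean(axis=0)                        # mean prediction
        total = generalized_entropy(p_bar, score)            # TU = H(p_bar)
        aleatoric = np.mean([generalized_entropy(p, score)   # AU = E[H(p)]
                             for p in ensemble])
        epistemic = total - aleatoric                        # EU = TU - AU
        return total, aleatoric, epistemic

    # Two members that agree vs. disagree: disagreement between ensemble
    # members shows up as epistemic uncertainty.
    agree = np.array([[0.9, 0.1], [0.9, 0.1]])
    disagree = np.array([[0.9, 0.1], [0.1, 0.9]])
    print(uncertainty_decomposition(agree, "log"))     # EU close to 0
    print(uncertainty_decomposition(disagree, "log"))  # EU = mutual information > 0

Passing score="zero_one" instead gives one way to obtain a zero-one-loss instantiation of epistemic uncertainty, in the spirit of the measure the abstract reports as strongest for active learning.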
Similar Papers
Uncertainty Quantification for Regression: A Unified Framework based on kernel scores
Machine Learning (CS)
Helps computers know when they are unsure.
Uncertainty Quantification in Probabilistic Machine Learning Models: Theory, Methods, and Insights
Machine Learning (Stat)
Helps computers know when they are unsure.