Unreliable Uncertainty Estimates with Monte Carlo Dropout
By: Aslak Djupskås, Alexander Johannes Stasik, Signe Riemer-Sørensen
Reliable uncertainty estimation is crucial for machine learning models, especially in safety-critical domains. While exact Bayesian inference offers a principled approach, it is often computationally infeasible for deep neural networks. Monte Carlo dropout (MCD) was proposed as an efficient approximation to Bayesian inference in deep learning, obtained by keeping neuron dropout active at inference time \citep{gal2016dropout}. The method thus samples multiple sub-models, yielding a distribution of predictions from which uncertainty is estimated. We empirically investigate its ability to capture true uncertainty and compare it to Gaussian Processes (GP) and Bayesian Neural Networks (BNN). We find that MCD struggles to accurately reflect the underlying true uncertainty, in particular failing to capture the increased uncertainty in extrapolation and interpolation regions that the Bayesian models exhibit. The findings suggest that uncertainty estimates from MCD, as implemented and evaluated in these experiments, are not as reliable as those from traditional Bayesian approaches for capturing epistemic and aleatoric uncertainty.
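To make the mechanism concrete, the sketch below shows how MCD-style predictive uncertainty is typically obtained: dropout is left active at inference time and repeated stochastic forward passes are aggregated. This is a minimal PyTorch illustration; the network architecture, dropout rate, and number of forward passes are assumptions for the example, not the configuration used in the paper.

import torch
import torch.nn as nn

# Small regression network with dropout layers (illustrative architecture).
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=100):
    """Repeated stochastic forward passes with dropout kept active."""
    model.train()  # train mode keeps dropout ON at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    # Predictive mean and standard deviation across the sampled sub-models.
    return preds.mean(dim=0), preds.std(dim=0)

x_test = torch.linspace(-3, 3, 200).unsqueeze(1)
mean, std = mc_dropout_predict(model, x_test)

The spread of the sampled predictions (std above) is what MCD reports as uncertainty; the paper's comparison asks whether this spread behaves like the posterior uncertainty of GP and BNN models, e.g. growing in extrapolation regions.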