Frequentist Validity of Epistemic Uncertainty Estimators
By: Anchit Jain, Stephen Bates
Potential Business Impact:
Lets AI systems recognize when they are unsure of an answer, and whether more training data would reduce that uncertainty.
Decomposing prediction uncertainty into its aleatoric (irreducible) and epistemic (reducible) components is critical for the development and deployment of machine learning systems. A popular, principled measure for epistemic uncertainty is the mutual information between the response variable and model parameters. However, evaluating this measure requires access to the posterior distribution of the model parameters, which is challenging to compute. In view of this, we introduce a frequentist measure of epistemic uncertainty based on the bootstrap. Our main theoretical contribution is a novel asymptotic expansion that reveals that our proposed (frequentist) measure and the (Bayesian) mutual information are asymptotically equivalent. This provides a frequentist interpretation of mutual information and new computational strategies for approximating it. Moreover, we link our proposed approach to the widely used heuristic approach of deep ensembles, giving added perspective on their practical success.
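The mutual information the abstract refers to has a standard ensemble form: the entropy of the averaged predictive distribution minus the average entropy of the individual predictive distributions. The sketch below shows how a bootstrap version of this quantity could be computed for a generic scikit-learn-style classifier; it is only an illustration under stated assumptions, not the authors' implementation, and the names fit_fn, n_boot, and bootstrap_epistemic_uncertainty are hypothetical.

import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of probability vectors along `axis`."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def bootstrap_epistemic_uncertainty(fit_fn, X_train, y_train, X_test,
                                    n_boot=50, rng=None):
    """Estimate epistemic uncertainty at X_test with a bootstrap ensemble.

    fit_fn(X, y) must return a fitted model exposing predict_proba(X).
    Returns the ensemble analogue of the mutual information
    I(Y; theta) = H[mean_b p_b(y|x)] - mean_b H[p_b(y|x)].
    """
    rng = np.random.default_rng(rng)
    n = len(X_train)
    probs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)            # resample with replacement
        model = fit_fn(X_train[idx], y_train[idx])  # refit on the bootstrap sample
        probs.append(model.predict_proba(X_test))   # (n_test, n_classes)
    probs = np.stack(probs)                         # (n_boot, n_test, n_classes)

    total = entropy(probs.mean(axis=0))             # entropy of the mixture
    aleatoric = entropy(probs).mean(axis=0)         # average member entropy
    return total - aleatoric                        # epistemic part, >= 0 by Jensen

Replacing the bootstrap resamples with independently trained networks gives the quantity deep ensembles commonly report for epistemic uncertainty, which is consistent with the connection to deep ensembles the abstract draws.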
Similar Papers
Calibrated Decomposition of Aleatoric and Epistemic Uncertainty in Deep Features for Inference-Time Adaptation
CV and Pattern Recognition
Separates aleatoric from epistemic uncertainty in deep features so models can adapt at inference time.
Uncertainty Estimation using Variance-Gated Distributions
Machine Learning (CS)
Estimates predictive uncertainty using variance-gated distributions.
A Theory of the Mechanics of Information: Generalization Through Measurement of Uncertainty (Learning is Measuring)
Machine Learning (CS)
Connects generalization in learning to the measurement of uncertainty.