Uncertainty Estimation using Variance-Gated Distributions
By: H. Martin Gillis, Isaac Xu, Thomas Trappenberg
Potential Business Impact:
Helps AI systems know how sure they should be about their answers.
Evaluation of per-sample uncertainty quantification from neural networks is essential for decision-making in high-risk applications. A common approach is to use the predictive distribution from Bayesian or approximation models and decompose the corresponding predictive uncertainty into epistemic (model-related) and aleatoric (data-related) components. However, the validity of this additive decomposition has recently been questioned. In this work, we propose an intuitive framework for uncertainty estimation and decomposition based on the signal-to-noise ratio of class probability distributions across different model predictions. We introduce a variance-gated measure that scales predictions by a confidence factor derived from ensembles, and we use it to examine the collapse of diversity in committee machines.
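The abstract does not spell out the gating formula, so the sketch below is only one plausible reading: it computes the widely used (and recently questioned) additive entropy decomposition over an ensemble's class probabilities, then forms a per-class signal-to-noise ratio and an assumed squashing gate to scale the mean prediction. The function names, the gate snr/(1+snr), and the toy data are illustrative assumptions, not the authors' definitions.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability vector (or a batch of vectors)."""
    return -np.sum(p * np.log(p + eps), axis=-1)

def additive_decomposition(member_probs):
    """Standard additive split (the one the paper notes has been questioned):
    total = H(mean prediction), aleatoric = mean of member entropies,
    epistemic = total - aleatoric (the mutual information)."""
    mean_p = member_probs.mean(axis=0)
    total = entropy(mean_p)
    aleatoric = entropy(member_probs).mean()
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

def variance_gated_prediction(member_probs, eps=1e-12):
    """Assumed variance-gated measure: scale the ensemble-mean class
    probabilities by a confidence factor built from the per-class
    signal-to-noise ratio across ensemble members."""
    mean_p = member_probs.mean(axis=0)      # per-class mean probability
    std_p = member_probs.std(axis=0)        # per-class disagreement across members
    snr = mean_p / (std_p + eps)            # per-class signal-to-noise ratio
    gate = snr / (1.0 + snr)                # assumed squashing of SNR into (0, 1)
    gated = mean_p * gate                   # down-weight high-variance classes
    gated /= gated.sum() + eps              # renormalize to a distribution
    return gated, snr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy example: 5 ensemble members, 3 classes, one sample.
    logits = rng.normal(size=(5, 3))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    print(additive_decomposition(probs))
    print(variance_gated_prediction(probs))
```

Under this reading, a class on which the committee members disagree (high variance, low SNR) is gated toward zero, so a collapse in ensemble diversity would show up directly as uniformly high gates.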
Similar Papers
Calibrated and uncertain? Evaluating uncertainty estimates in binary classification models
Machine Learning (CS)
Helps computers know when they are unsure.
Cooperative Bayesian and variance networks disentangle aleatoric and epistemic uncertainties
Machine Learning (CS)
Helps computers separate the different reasons they are unsure.
Uncertainty-Aware Strategies: A Model-Agnostic Framework for Robust Financial Optimization through Subsampling
Computational Finance
Helps make financial decisions safer when the numbers are uncertain.