Efficient Post-Hoc Uncertainty Calibration via Variance-Based Smoothing
By: Fabian Denoodt, José Oramas
Potential Business Impact:
Gives AI systems a cheaper way to estimate how confident they should be in their predictions, so they can flag answers that might be wrong.
Since state-of-the-art uncertainty estimation methods are often computationally demanding, we investigate whether incorporating prior information can improve uncertainty estimates in conventional deep neural networks. Our focus is on machine learning tasks where meaningful predictions can be made from sub-parts of the input. For example, in speaker classification, the speech waveform can be divided into sequential patches, each containing information about the same speaker. We observe that the variance between sub-predictions serves as a reliable proxy for uncertainty in such settings. Our proposed variance-based scaling framework produces competitive uncertainty estimates in classification while being less computationally demanding and allowing for integration as a post-hoc calibration tool. This approach also leads to a simple extension of deep ensembles, improving the expressiveness of their predicted distributions.
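The abstract does not spell out the exact scaling rule, so the following is only a minimal Python sketch of the core idea under assumptions of my own: a hypothetical `model` callable that maps each sub-part (patch) of an input to class logits, and hypothetical hyperparameters `base_temperature` and `alpha` that would be tuned on held-out data. It treats the variance across per-patch sub-predictions as an uncertainty proxy and uses it to soften (temperature-scale) the averaged prediction as a post-hoc calibration step.

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def variance_scaled_prediction(model, x_patches, base_temperature=1.0, alpha=1.0):
    """Post-hoc calibration sketch (not the authors' exact method):
    scale the temperature of the averaged prediction by the variance
    observed across sub-predictions.

    model            -- assumed callable mapping one patch to class logits
    x_patches        -- sub-parts of a single input (e.g. waveform patches)
    base_temperature, alpha -- hypothetical hyperparameters, tuned on
                               a held-out validation set
    """
    # Per-patch class probabilities (the sub-predictions).
    probs = np.stack([softmax(model(p)) for p in x_patches])  # (n_patches, n_classes)

    # Disagreement between sub-predictions serves as an uncertainty proxy.
    disagreement = probs.var(axis=0).mean()

    # Higher disagreement -> higher temperature -> softer, less confident output.
    temperature = base_temperature * (1.0 + alpha * disagreement)

    # Average in probability space, then temper via the log-probabilities.
    mean_logits = np.log(probs.mean(axis=0) + 1e-12)
    return softmax(mean_logits / temperature)
```

Averaging in probability space and then tempering the log-probabilities is just one reasonable design choice here; the same variance signal could equally gate an ensemble's predictive distribution, which is the extension the abstract hints at.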
Similar Papers
Uncertainty Estimation using Variance-Gated Distributions
Machine Learning (CS)
Helps AI judge how confident it should be in its answers.
Uncertainty-Aware Post-Hoc Calibration: Mitigating Confidently Incorrect Predictions Beyond Calibration Metrics
Machine Learning (CS)
Makes AI better at knowing when it's wrong.
Contextual Similarity Distillation: Ensemble Uncertainties with a Single Model
Machine Learning (CS)
Makes AI guess how sure it is.