Calibrated Uncertainty Sampling for Active Learning
By: Ha Manh Bui, Iliana Maifeld-Carucci, Anqi Liu
Potential Business Impact:
Makes machine learning models more accurate and trustworthy.
We study the problem of actively learning a classifier with low calibration error. One of the most popular Acquisition Functions (AFs) in pool-based Active Learning (AL) is querying by the model's uncertainty. However, we recognize that an uncalibrated uncertainty model on the unlabeled pool can significantly reduce the effectiveness of the AF, leading to sub-optimal generalization and high calibration error on unseen data. Deep Neural Networks (DNNs) make this worse, since the uncertainty estimates they produce are usually uncalibrated. We therefore propose a new AF that estimates calibration errors and queries the samples with the highest estimated calibration error before leveraging DNN uncertainty. Specifically, we utilize a kernel calibration error estimator under covariate shift and formally show that AL with this AF eventually yields a bounded calibration error on the unlabeled pool and on unseen test data. Empirically, our proposed method surpasses other AF baselines, achieving lower calibration and generalization error across pool-based AL settings.
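The abstract does not spell out the estimator, but the querying idea can be illustrated with a minimal sketch. The snippet below assumes a simplified kernel-smoothed miscalibration score per unlabeled point (the names `calibration_error_scores`, `rbf_kernel`, and the confidence-vs-correctness gap are illustrative assumptions, not the paper's actual estimator) and then selects the pool points with the highest score, which is the acquisition step the abstract describes.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise RBF kernel between rows of feature matrices a (n, d) and b (m, d).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def calibration_error_scores(probs_pool, feats_pool,
                             probs_lab, feats_lab, labels_lab, gamma=1.0):
    # Hypothetical per-sample score: kernel-weighted average of the
    # |confidence - correctness| gap observed on labeled data, used here as a
    # rough stand-in for a kernel calibration-error estimate on the pool.
    conf_lab = probs_lab.max(axis=1)
    correct_lab = (probs_lab.argmax(axis=1) == labels_lab).astype(float)
    gap = np.abs(conf_lab - correct_lab)            # per-labeled-point miscalibration
    K = rbf_kernel(feats_pool, feats_lab, gamma)    # similarity: pool -> labeled
    w = K / (K.sum(axis=1, keepdims=True) + 1e-12)  # normalized kernel weights
    return w @ gap                                  # smoothed miscalibration per pool point

def select_queries(scores, batch_size):
    # Acquisition step: query the pool points with the highest estimated calibration error.
    return np.argsort(-scores)[:batch_size]
```

In a pool-based AL loop, `select_queries` would replace (or precede) a plain max-uncertainty criterion: after each round of labeling and retraining, the scores are recomputed on the remaining pool and the next batch is drawn from the highest-scoring points.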
Similar Papers
Optimizing Active Learning in Vision-Language Models via Parameter-Efficient Uncertainty Calibration
CV and Pattern Recognition
Teaches computers to learn from less data.
Boosting Active Learning with Knowledge Transfer
CV and Pattern Recognition
Helps computers learn what they don't know.
Uncertainty-Aware Post-Hoc Calibration: Mitigating Confidently Incorrect Predictions Beyond Calibration Metrics
Machine Learning (CS)
Makes AI better at knowing when it's wrong.