When Models Know When They Do Not Know: Calibration, Cascading, and Cleaning
By: Chenjie Hao, Weyl Lu, Yuko Ishiwaka, and more
Potential Business Impact:
Helps AI know when it's wrong.
When a model knows when it does not know, many possibilities emerge. The first question is how to enable a model to recognize that it does not know. A promising approach is to use confidence, computed from the model's internal signals, to reflect its ignorance. Prior work in specific domains has shown that calibration can provide reliable confidence estimates. In this work, we propose a simple, effective, and universal training-free method that applies to both vision and language models, combining model calibration, cascading, and data cleaning to better exploit a model's ability to recognize when it does not know. We first highlight two key empirical observations: higher confidence corresponds to higher accuracy within a single model, and models calibrated on the validation set remain calibrated on a held-out test set. These findings empirically establish the reliability and comparability of calibrated confidence. Building on this, we introduce two applications: (1) model cascading with calibrated advantage routing and (2) data cleaning based on a model ensemble. Using the routing signal derived from the comparability of calibrated confidences, we cascade large and small models to improve efficiency with almost no loss in accuracy, and we further cascade two models of comparable scale to achieve performance beyond either model alone. Leveraging multiple experts and their calibrated confidences, we design a simple yet effective data-cleaning method that balances precision and detection rate to identify mislabeled samples in the ImageNet and Massive Multitask Language Understanding (MMLU) datasets. Our results demonstrate that enabling models to recognize when they do not know is a practical step toward more efficient, reliable, and trustworthy AI.
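To make the calibration, cascading, and cleaning steps concrete, the sketch below shows one way such a pipeline could be wired together. It is a minimal illustration, not the authors' released implementation: it assumes temperature scaling as the post-hoc calibrator and fixed confidence thresholds for routing and cleaning, and the function names (fit_temperature, cascade_predict, flag_mislabeled), the 0.8 routing threshold, and the 0.9 confidence floor are all illustrative assumptions.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities at a given temperature."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    """Grid-search a single temperature that minimizes negative log-likelihood
    on a held-out validation set (post-hoc, training-free calibration)."""
    best_t, best_nll = 1.0, np.inf
    for t in grid:
        probs = softmax(logits, t)
        nll = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
        if nll < best_nll:
            best_t, best_nll = t, nll
    return best_t

def cascade_predict(small_logits, large_logits, t_small, threshold=0.8):
    """Route each example: keep the small model's answer when its calibrated
    confidence clears the threshold, otherwise defer to the large model."""
    small_probs = softmax(small_logits, t_small)
    use_small = small_probs.max(axis=-1) >= threshold
    preds = np.where(use_small,
                     small_probs.argmax(axis=-1),
                     softmax(large_logits).argmax(axis=-1))
    return preds, use_small

def flag_mislabeled(expert_logits_list, temperatures, labels, conf_floor=0.9):
    """Ensemble data cleaning: flag a sample when every calibrated expert
    disagrees with its label while being confident in some other class."""
    flags = np.ones(len(labels), dtype=bool)
    for logits, t in zip(expert_logits_list, temperatures):
        probs = softmax(logits, t)
        disagrees = probs.argmax(axis=-1) != labels
        confident = probs.max(axis=-1) >= conf_floor
        flags &= disagrees & confident
    return flags

if __name__ == "__main__":
    # Toy demo with random logits, just to show the call pattern.
    rng = np.random.default_rng(0)
    val_logits = rng.normal(size=(100, 10))
    val_labels = rng.integers(0, 10, size=100)
    t = fit_temperature(val_logits, val_labels)
    preds, routed = cascade_predict(val_logits, rng.normal(size=(100, 10)), t)
    print(f"temperature={t:.2f}, kept by small model: {routed.mean():.0%}")
```

In this sketch the cascade accepts the small model's answer only when its calibrated confidence clears the threshold, which is where the efficiency gain with little accuracy loss would come from, and the cleaning step trades precision against detection rate by raising or lowering conf_floor.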
Similar Papers
Data-Efficient Prediction-Powered Calibration via Cross-Validation
Machine Learning (CS)
Makes AI decisions more trustworthy with less data.
Mind the Confidence Gap: Overconfidence, Calibration, and Distractor Effects in Large Language Models
Computation and Language
Makes AI more honest about what it knows.
Beyond the Final Layer: Intermediate Representations for Better Multilingual Calibration in Large Language Models
Computation and Language
Makes AI understand other languages better.