Split Conformal Classification with Unsupervised Calibration
By: Santiago Mazuelas
Potential Business Impact:
Lets models learn from all labeled data, instead of holding some back just for calibration.
Methods for split conformal prediction leverage calibration samples to transform any prediction rule into a set-prediction rule that complies with a target coverage probability. Existing methods provide remarkably strong performance guarantees with minimal computational costs. However, they require calibration samples composed of labeled examples different from those used for training. This requirement can be highly inconvenient, as it prevents the use of all labeled examples for training and may require acquiring additional labels solely for calibration. This paper presents an effective methodology for split conformal prediction with unsupervised calibration for classification tasks. In the proposed approach, set-prediction rules are obtained using unsupervised calibration samples together with the supervised training samples previously used to learn the classification rule. Theoretical and experimental results show that the presented methods can achieve performance comparable to that with supervised calibration, at the expense of a moderate degradation in performance guarantees and computational efficiency.
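For context, the sketch below illustrates the standard supervised-calibration baseline that the abstract contrasts against: a held-out labeled calibration set is used to pick a nonconformity threshold that turns any probabilistic classifier into a set-valued predictor with target coverage 1 - alpha. This is not the paper's unsupervised method; the function name `prob_fn` and the specific nonconformity score (one minus the predicted probability of the true label) are illustrative assumptions.

```python
import numpy as np

def split_conformal_sets(prob_fn, X_cal, y_cal, X_test, alpha=0.1):
    """Standard split conformal classification with a supervised calibration set.

    prob_fn : callable returning class probabilities, shape (n, n_classes)
              (assumed interface, e.g. a fitted model's predict_proba).
    Returns a boolean matrix sets[i, k]: True if class k is in the
    prediction set for test point i.
    """
    # Nonconformity score on calibration data: 1 - probability of the true label.
    cal_probs = prob_fn(X_cal)
    scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]

    # Conformal quantile with the finite-sample correction (n + 1 in the numerator).
    n = len(y_cal)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")

    # Prediction set: all classes whose nonconformity score is below the threshold.
    test_probs = prob_fn(X_test)
    return (1.0 - test_probs) <= q_hat
```

The paper's contribution is to remove the need for the labeled pairs `(X_cal, y_cal)` above, calibrating instead with unlabeled samples together with the training data already used to fit the classifier.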
Similar Papers
Ensuring Calibration Robustness in Split Conformal Prediction Under Adversarial Attacks
Machine Learning (Stat)
Makes AI predictions more trustworthy against tricks.
Conformal prediction without knowledge of labeled calibration data
Methodology
Lets computers guess answers with a safety net.