SConU: Selective Conformal Uncertainty in Large Language Models
By: Zhiyuan Wang, Qingni Wang, Yue Zhang, and more
Potential Business Impact:
Makes AI predictions more trustworthy and reliable.
As large language models are increasingly utilized in real-world applications, guarantees on task-specific metrics are essential for their reliable deployment. Previous studies have introduced various criteria of conformal uncertainty grounded in split conformal prediction, which offer user-specified correctness coverage. However, existing frameworks often fail to identify uncertainty data outliers that violate the exchangeability assumption, leading to unbounded miscoverage rates and unactionable prediction sets. In this paper, we propose a novel approach termed Selective Conformal Uncertainty (SConU), which, for the first time, implements significance tests by developing two conformal p-values that determine whether a given sample deviates from the uncertainty distribution of the calibration set at a specific, manageable risk level. Our approach not only facilitates rigorous management of miscoverage rates across both single-domain and interdisciplinary contexts, but also enhances the efficiency of predictions. Furthermore, we comprehensively analyze the components of the conformal procedures, aiming to approximate conditional coverage, particularly in high-stakes question-answering tasks.
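To make the mechanism concrete, the sketch below shows a generic split-conformal procedure with a conformal p-value gate. It is a minimal illustration, not the authors' SConU method: the scalar uncertainty scores, the single p-value test, the abstention rule, and the risk levels `alpha` and `delta` are all assumptions made for this sketch.

```python
# Minimal sketch (not the SConU implementation): split conformal prediction
# with a conformal p-value gate that abstains on samples whose uncertainty
# looks inconsistent with the calibration distribution.
import numpy as np

def conformal_p_value(test_score, calib_scores):
    """Fraction of calibration scores at least as extreme as the test score.
    A small p-value suggests the test sample is an uncertainty outlier that
    may violate exchangeability with the calibration set."""
    n = len(calib_scores)
    return (1 + np.sum(calib_scores >= test_score)) / (n + 1)

def selective_prediction_set(test_score, candidate_scores, calib_scores,
                             alpha=0.1, delta=0.05):
    """Abstain if the sample is flagged as an outlier at risk level delta;
    otherwise return a prediction set calibrated for 1 - alpha coverage."""
    if conformal_p_value(test_score, calib_scores) < delta:
        return None  # abstain: sample deviates from the calibration distribution

    # Standard split-conformal quantile of the calibration scores.
    n = len(calib_scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(calib_scores, q_level, method="higher")

    # Keep candidate answers whose nonconformity scores fall under the threshold.
    return [i for i, s in enumerate(candidate_scores) if s <= q_hat]

# Toy usage with synthetic scores.
rng = np.random.default_rng(0)
calib = rng.exponential(scale=1.0, size=500)      # calibration uncertainty scores
test_sample_score = 1.2                           # aggregate uncertainty of a test query
candidates = rng.exponential(scale=1.0, size=10)  # per-answer nonconformity scores
print(selective_prediction_set(test_sample_score, candidates, calib))
```

In this toy setup, abstention (returning `None`) stands in for flagging a query whose uncertainty profile the calibration data cannot vouch for, while the quantile step is the usual split-conformal construction that yields the user-specified coverage on non-rejected samples.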
Similar Papers
Conformal Prediction and Human Decision Making
Machine Learning (CS)
Helps AI make better guesses for people.
PCS-UQ: Uncertainty Quantification via the Predictability-Computability-Stability Framework
Machine Learning (Stat)
Makes AI predictions more accurate and trustworthy.
Selective Conformal Risk Control
Machine Learning (CS)
Makes AI predictions more trustworthy and useful.