Score: 1

SConU: Selective Conformal Uncertainty in Large Language Models

Published: April 19, 2025 | arXiv ID: 2504.14154v2

By: Zhiyuan Wang, Qingni Wang, Yue Zhang, and more

Potential Business Impact:

Provides statistically guaranteed correctness coverage for LLM outputs and flags out-of-distribution queries, making AI predictions more trustworthy and reliable.

Business Areas:
A/B Testing, Data and Analytics

As large language models are increasingly deployed in real-world applications, guarantees on task-specific metrics are essential for their reliable use. Previous studies have introduced various criteria of conformal uncertainty grounded in split conformal prediction, which offer user-specified correctness coverage. However, existing frameworks often fail to identify uncertainty data outliers that violate the exchangeability assumption, leading to unbounded miscoverage rates and unactionable prediction sets. In this paper, we propose a novel approach termed Selective Conformal Uncertainty (SConU), which, for the first time, implements significance tests by developing two conformal p-values that determine whether a given sample deviates from the uncertainty distribution of the calibration set at a specified, manageable risk level. Our approach not only facilitates rigorous management of miscoverage rates across both single-domain and interdisciplinary contexts, but also improves the efficiency of prediction sets. Furthermore, we comprehensively analyze the components of the conformal procedures, aiming to approximate conditional coverage, particularly in high-stakes question-answering tasks.
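
For readers who want a concrete picture, the two building blocks described in the abstract (a conformal p-value used as a significance test for outliers, and a split-conformal prediction set with user-specified coverage) can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the paper's actual SConU procedure: the function names, score definitions, parameters `alpha` and `delta`, and the synthetic data below are all assumptions introduced for illustration.

```python
import numpy as np

def conformal_p_value(cal_scores, test_score):
    """Fraction of calibration nonconformity scores at least as extreme as the
    test score (with the usual +1 correction); valid under exchangeability."""
    cal_scores = np.asarray(cal_scores)
    n = len(cal_scores)
    return (1 + np.sum(cal_scores >= test_score)) / (n + 1)

def selective_prediction_set(cal_scores, candidate_scores, test_uncertainty,
                             alpha=0.1, delta=0.05):
    """Abstain when the query's uncertainty score looks like an outlier
    (conformal p-value <= delta); otherwise return a split-conformal
    prediction set targeting miscoverage rate alpha.

    Hypothetical sketch: the real SConU method defines its own scores and
    two distinct p-values; this only illustrates the general mechanism."""
    cal_scores = np.asarray(cal_scores)

    # Significance test: is this sample consistent with the calibration set?
    if conformal_p_value(cal_scores, test_uncertainty) <= delta:
        return None  # likely exchangeability violation -> abstain

    # Standard split-conformal quantile of the calibration scores.
    n = len(cal_scores)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q_hat = np.quantile(cal_scores, q_level, method="higher")

    # Keep every candidate answer whose nonconformity score is within q_hat.
    return [i for i, s in enumerate(candidate_scores) if s <= q_hat]

# Toy usage with synthetic scores (purely illustrative).
rng = np.random.default_rng(0)
cal = rng.exponential(1.0, size=500)      # calibration nonconformity scores
cands = rng.exponential(1.0, size=10)     # scores for one query's candidate answers
print(selective_prediction_set(cal, cands, test_uncertainty=float(cands.mean())))
```

Under this sketch, queries whose uncertainty is implausible relative to the calibration distribution are filtered out before a prediction set is formed, which is how the miscoverage rate stays controlled when some test samples are not exchangeable with the calibration data.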

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
24 pages

Category
Computer Science:
Computation and Language