Score: 1

Domain-Shift-Aware Conformal Prediction for Large Language Models

Published: October 7, 2025 | arXiv ID: 2510.05566v1

By: Zhexiao Lin, Yuanyuan Li, Neeraj Sarna, and more

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Makes AI answers more trustworthy by attaching reliable confidence guarantees, even when questions differ from the data the system was calibrated on.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models have achieved impressive performance across diverse tasks. However, their tendency to produce overconfident and factually incorrect outputs, known as hallucinations, poses risks in real-world applications. Conformal prediction provides finite-sample, distribution-free coverage guarantees, but standard conformal prediction breaks down under domain shift, often leading to under-coverage and unreliable prediction sets. We propose a new framework called Domain-Shift-Aware Conformal Prediction (DS-CP). Our framework adapts conformal prediction to large language models under domain shift by systematically reweighting calibration samples based on their proximity to the test prompt, thereby preserving validity while enhancing adaptivity. Our theoretical analysis and experiments on the MMLU benchmark demonstrate that the proposed method delivers more reliable coverage than standard conformal prediction, especially under substantial distribution shifts, while maintaining efficiency. This provides a practical step toward trustworthy uncertainty quantification for large language models in real-world deployment.
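
The abstract describes the core mechanism only at a high level, so the following is a minimal sketch of the reweighting idea: calibration prompts closer to the test prompt get larger weight when computing the conformal quantile. The Gaussian kernel on prompt-embedding distance, the bandwidth, and the embedding source are illustrative assumptions, not the paper's exact construction.

```python
# Illustrative sketch of proximity-weighted conformal prediction.
# Assumptions (not from the paper): prompt embeddings come from some external
# encoder, a Gaussian kernel turns embedding distance into a weight, and
# nonconformity scores for the calibration prompts are precomputed.
import numpy as np


def weighted_quantile(scores, weights, level):
    """Smallest score s such that the weighted fraction of scores <= s reaches `level`."""
    order = np.argsort(scores)
    scores, weights = scores[order], weights[order]
    cdf = np.cumsum(weights) / np.sum(weights)
    idx = np.searchsorted(cdf, level, side="left")
    return scores[min(idx, len(scores) - 1)]


def dscp_threshold(calib_scores, calib_emb, test_emb, alpha=0.1, bandwidth=1.0):
    """Proximity-weighted conformal threshold for a single test prompt.

    calib_scores: (n,) nonconformity scores on calibration prompts
    calib_emb:    (n, d) calibration prompt embeddings
    test_emb:     (d,)  test prompt embedding
    """
    # Upweight calibration prompts whose embeddings lie near the test prompt.
    dist2 = np.sum((calib_emb - test_emb) ** 2, axis=1)
    w = np.exp(-dist2 / (2.0 * bandwidth**2))
    # Weighted-conformal correction: append the test point with its own weight
    # (kernel at zero distance = 1) and an infinite score, so the threshold is
    # conservative when nearby calibration mass is scarce.
    scores = np.append(calib_scores, np.inf)
    weights = np.append(w, 1.0)
    return weighted_quantile(scores, weights, 1.0 - alpha)


# Usage: keep any candidate answer whose nonconformity score is <= the threshold.
# rng = np.random.default_rng(0)
# thr = dscp_threshold(rng.random(500), rng.normal(size=(500, 8)), rng.normal(size=8))
```

When all weights are equal this reduces to standard split conformal prediction; the proximity weights are what let coverage adapt to a test prompt drawn from a shifted domain.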

Country of Origin
🇺🇸 United States

Page Count
26 pages

Category
Statistics: Machine Learning (stat.ML)