Estimating Semantic Alphabet Size for LLM Uncertainty Quantification
By: Lucas H. McCabe, Rimon Melamed, Thomas Hartvigsen, and more
Potential Business Impact:
Finds when AI is wrong, using a simple, interpretable method.
Many black-box techniques for quantifying the uncertainty of large language models (LLMs) rely on repeated LLM sampling, which can be computationally expensive. Practical applicability therefore demands reliable estimation from few samples. Semantic entropy (SE) is a popular sample-based uncertainty estimator with a discrete formulation that is attractive for the black-box setting. Recent extensions of semantic entropy exhibit improved LLM hallucination detection, but do so with less interpretable methods that admit additional hyperparameters. For this reason, we revisit the canonical discrete semantic entropy estimator, finding that it underestimates the "true" semantic entropy, as expected from theory. We propose a modified semantic alphabet size estimator and illustrate that using it to adjust discrete semantic entropy for sample coverage results in more accurate semantic entropy estimation in our setting of interest. Furthermore, our proposed alphabet size estimator flags incorrect LLM responses as well as or better than recent top-performing approaches, with the added benefit of remaining highly interpretable.
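To make the quantities concrete, here is a minimal Python sketch. It assumes the sampled responses have already been grouped into semantic clusters (e.g., via bidirectional-entailment clustering, as is standard in the semantic entropy literature). The coverage correction shown is the classic Good-Turing/Chao-Shen adjustment, and the alphabet-size estimator is the classic Chao1 lower bound; both are illustrative stand-ins, not the paper's modified estimator.

```python
import math
from collections import Counter

def discrete_semantic_entropy(cluster_ids):
    """Plug-in (maximum-likelihood) semantic entropy over cluster labels.

    `cluster_ids` holds one semantic-cluster label per sampled response.
    This estimator is negatively biased: with few samples it tends to
    underestimate the true semantic entropy.
    """
    n = len(cluster_ids)
    counts = Counter(cluster_ids)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def coverage_adjusted_entropy(cluster_ids):
    """Chao-Shen entropy estimator: rescales cluster probabilities by the
    Good-Turing sample-coverage estimate and applies a Horvitz-Thompson
    correction. A standard coverage adjustment, not the paper's own.
    """
    n = len(cluster_ids)
    counts = Counter(cluster_ids)
    f1 = sum(1 for c in counts.values() if c == 1)  # singleton clusters
    coverage = 1.0 - f1 / n                          # Good-Turing coverage
    if coverage == 0.0:
        coverage = 1.0 / n  # guard: every cluster observed exactly once
    h = 0.0
    for c in counts.values():
        p = coverage * (c / n)  # coverage-adjusted cluster probability
        h -= p * math.log(p) / (1.0 - (1.0 - p) ** n)
    return h

def chao1_alphabet_size(cluster_ids):
    """Chao1 lower bound on the number of semantic clusters (the
    "semantic alphabet size"). The paper proposes a modified estimator;
    this classic form is shown for illustration only.
    """
    counts = Counter(cluster_ids)
    f1 = sum(1 for c in counts.values() if c == 1)  # singletons
    f2 = sum(1 for c in counts.values() if c == 2)  # doubletons
    if f2 > 0:
        return len(counts) + f1 * f1 / (2.0 * f2)
    return len(counts) + f1 * (f1 - 1) / 2.0  # bias-corrected, no doubletons

# Toy demo: 10 sampled answers falling into 4 semantic clusters.
samples = ["a", "a", "a", "a", "b", "b", "b", "c", "c", "d"]
print(discrete_semantic_entropy(samples))  # ~1.28 nats (plug-in)
print(coverage_adjusted_entropy(samples))  # ~1.45 nats (bias-reduced)
print(chao1_alphabet_size(samples))        # 4.5 (>= 4 observed clusters)
```

In the toy demo the coverage-adjusted estimate exceeds the plug-in value, consistent with the abstract's observation that the canonical discrete estimator underestimates the true semantic entropy at small sample sizes.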
Similar Papers
Beyond Semantic Entropy: Boosting LLM Uncertainty Quantification with Pairwise Semantic Similarity
Machine Learning (CS)
Finds when AI is making things up.
Semantic Energy: Detecting LLM Hallucination Beyond Entropy
Machine Learning (CS)
Finds when AI is wrong and tells you.