Semantic Reformulation Entropy for Robust Hallucination Detection in QA Tasks
By: Chaodong Tong, Qi Zhang, Lei Jiang, and more
Potential Business Impact:
Detects when an AI makes up wrong answers.
Reliable question answering with large language models (LLMs) is challenged by hallucinations: fluent but factually incorrect outputs that arise from epistemic uncertainty. Existing entropy-based methods for semantic-level uncertainty estimation are limited by sampling noise and by unstable clustering of variable-length answers. We propose Semantic Reformulation Entropy (SRE), which improves uncertainty estimation in two ways. First, input-side semantic reformulations produce faithful paraphrases of the question, expanding the estimation space and reducing biases from superficial decoder tendencies. Second, a progressive, energy-based hybrid clustering scheme stabilizes semantic grouping of the sampled answers. Experiments on SQuAD and TriviaQA show that SRE outperforms strong baselines, providing more robust and generalizable hallucination detection. These results demonstrate that combining input diversification with multi-signal clustering substantially enhances semantic-level uncertainty estimation.
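To make the abstract's pipeline concrete, here is a minimal sketch of a semantic-entropy-style hallucination score that pools answers across question reformulations. The `paraphrase`, `answer`, and `equivalent` callables are placeholders to be supplied by the reader (e.g., an LLM paraphraser, an LLM sampler, and an NLI-based equivalence judge), and the simple greedy clustering below only stands in for SRE's progressive, energy-based hybrid clustering, which the abstract does not specify in detail.

```python
# Hedged sketch: semantic-entropy-style scoring with input-side reformulations.
# Assumptions (not from the paper): paraphrase generation, answer sampling, and
# the pairwise equivalence check are user-supplied placeholders; greedy
# clustering approximates SRE's energy-based hybrid clustering.
import math
from collections import Counter
from typing import Callable, List


def cluster_answers(
    answers: List[str],
    equivalent: Callable[[str, str], bool],
) -> List[int]:
    """Greedy semantic clustering: join the first cluster whose representative
    the answer is judged equivalent to, otherwise open a new cluster."""
    reps: List[str] = []
    labels: List[int] = []
    for ans in answers:
        for idx, rep in enumerate(reps):
            if equivalent(ans, rep):
                labels.append(idx)
                break
        else:
            reps.append(ans)
            labels.append(len(reps) - 1)
    return labels


def semantic_entropy(labels: List[int]) -> float:
    """Entropy over cluster frequencies; higher values indicate more semantic
    disagreement among sampled answers."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())


def sre_score(
    question: str,
    paraphrase: Callable[[str], List[str]],   # faithful reformulations of the question
    answer: Callable[[str], List[str]],       # sampled LLM answers for one question
    equivalent: Callable[[str, str], bool],   # semantic-equivalence judge
) -> float:
    """Pool answers across the original question and its reformulations,
    cluster them semantically, and return the entropy of the clusters."""
    questions = [question] + paraphrase(question)
    pooled = [a for q in questions for a in answer(q)]
    return semantic_entropy(cluster_answers(pooled, equivalent))
```

In use, the returned entropy would be thresholded: questions whose pooled answers split into many roughly equal clusters score high and are flagged as likely hallucinations, while answers that collapse into one dominant cluster score near zero.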
Similar Papers
SeSE: A Structural Information-Guided Uncertainty Quantification Framework for Hallucination Detection in LLMs
Computation and Language
Detects when an AI makes up wrong answers.
Vision-Amplified Semantic Entropy for Hallucination Detection in Medical Visual Question Answering
CV and Pattern Recognition
Finds fake answers in medical AI.
Semantic Energy: Detecting LLM Hallucination Beyond Entropy
Machine Learning (CS)
Detects when the AI is wrong and tells you.