Neural Probe-Based Hallucination Detection for Large Language Models
By: Shize Liang, Hongzhi Wang
Potential Business Impact:
Flags fabricated facts in AI-generated text, supporting safer use in high-stakes settings.
Large language models (LLMs) excel at text generation and knowledge-intensive question answering, but they are prone to generating hallucinated content, which severely limits their application in high-risk domains. Current hallucination detection methods based on uncertainty estimation and external knowledge retrieval have clear limitations: the former still assigns high confidence to erroneous content, and the latter depends heavily on retrieval efficiency and knowledge coverage. In contrast, probe methods that leverage the model's hidden states offer real-time, lightweight detection. However, traditional linear probes struggle to capture nonlinear structure in deep semantic spaces. To overcome these limitations, we propose a neural network-based framework for token-level hallucination detection. Keeping the language model's parameters frozen, we employ lightweight MLP probes to perform nonlinear modeling of high-level hidden states. A multi-objective joint loss function is designed to improve detection stability and semantic discrimination. Additionally, we establish a response model relating probe insertion layer to probe performance and use Bayesian optimization to automatically search for the optimal insertion layer. Experimental results on LongFact, HealthBench, and TriviaQA demonstrate that MLP probes significantly outperform state-of-the-art methods in accuracy, recall, and detection capability under low false-positive-rate conditions.
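As a rough illustration of the probe design described in the abstract, the sketch below trains a small MLP on frozen hidden states to score each token as hallucinated or supported. It assumes PyTorch, an illustrative 4096-dimensional hidden state, and a placeholder auxiliary loss term; the paper's exact joint loss, probe architecture, and layer-search procedure are not reproduced here.

```python
import torch
import torch.nn as nn


class MLPProbe(nn.Module):
    """Lightweight nonlinear probe over frozen LLM hidden states.

    The base model stays frozen; only the probe is trained to classify
    each token's hidden state as supported vs. hallucinated.
    """

    def __init__(self, hidden_dim: int = 4096, probe_dim: int = 256, dropout: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, probe_dim),
            nn.GELU(),
            nn.Dropout(dropout),
            nn.Linear(probe_dim, 1),  # one hallucination logit per token
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim) taken from one chosen layer
        return self.net(hidden_states).squeeze(-1)  # (batch, seq_len)


def probe_loss(logits, labels, mask, aux_weight=0.1):
    """Joint-loss sketch: token-level BCE plus an illustrative entropy bonus
    that discourages overconfident predictions (a stand-in for the paper's
    multi-objective terms)."""
    bce = nn.functional.binary_cross_entropy_with_logits(
        logits, labels.float(), reduction="none"
    )
    bce = (bce * mask).sum() / mask.sum().clamp(min=1)
    probs = torch.sigmoid(logits)
    entropy = -(probs * probs.clamp(min=1e-6).log()
                + (1 - probs) * (1 - probs).clamp(min=1e-6).log())
    reg = (entropy * mask).sum() / mask.sum().clamp(min=1)
    return bce - aux_weight * reg


if __name__ == "__main__":
    # Toy run with random tensors standing in for a frozen LLM layer's outputs.
    probe = MLPProbe(hidden_dim=4096)
    h = torch.randn(2, 16, 4096)           # (batch, seq_len, hidden_dim)
    labels = torch.randint(0, 2, (2, 16))   # 1 = hallucinated token
    mask = torch.ones(2, 16)                # 1 = token has a label
    loss = probe_loss(probe(h), labels, mask)
    loss.backward()
    print("loss:", loss.item())
```

In the full framework, the layer supplying `hidden_states` would not be fixed by hand: the abstract describes fitting a layer position-probe performance response model and using Bayesian optimization to select the insertion layer automatically.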
Similar Papers
Principled Detection of Hallucinations in Large Language Models via Multiple Testing
Computation and Language
Flags likely-hallucinated LLM answers by casting detection as a multiple-testing problem.
The Illusion of Progress: Re-evaluating Hallucination Detection in LLMs
Computation and Language
Re-examines how well current hallucination detectors actually perform.