Streaming Hallucination Detection in Long Chain-of-Thought Reasoning
By: Haolang Lu, Minghui Pan, Ripeng Li, and more
Potential Business Impact:
Spots made-up content in an AI's thinking steps as it reasons.
Long chain-of-thought (CoT) reasoning improves the performance of large language models, yet hallucinations in such settings often emerge subtly and propagate across reasoning steps. We suggest that hallucination in long CoT reasoning is better understood as an evolving latent state rather than a one-off erroneous event. Accordingly, we treat step-level hallucination judgments as local observations and introduce a cumulative prefix-level hallucination signal that tracks the global evolution of the reasoning state over the entire trajectory. Overall, our approach enables streaming hallucination detection in long CoT reasoning, providing real-time, interpretable evidence.
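The abstract names two ingredients: step-level hallucination judgments used as local observations, and a cumulative prefix-level signal that tracks the reasoning state as the trajectory streams in. The sketch below is a rough illustration only, folding hypothetical step-level scores into a running prefix signal with an exponential moving average and a flagging threshold; the function name, the aggregation rule, and the threshold are assumptions for illustration, not the paper's actual formulation.

```python
from typing import Iterable, Iterator, Tuple


def stream_prefix_hallucination(
    step_scores: Iterable[float],
    decay: float = 0.9,
    threshold: float = 0.5,
) -> Iterator[Tuple[int, float, bool]]:
    """Fold step-level hallucination scores (local observations) into a
    cumulative prefix-level signal, emitted as each reasoning step arrives.

    The prefix signal here is an exponential moving average of the step
    scores -- an illustrative aggregation choice, not the paper's formula.
    Yields (step_index, prefix_signal, flagged).
    """
    prefix_signal = 0.0
    for i, score in enumerate(step_scores):
        # Update the evolving prefix-level state from the new local observation.
        prefix_signal = decay * prefix_signal + (1.0 - decay) * score
        yield i, prefix_signal, prefix_signal >= threshold


if __name__ == "__main__":
    # Hypothetical per-step judgments from some external step-level detector.
    step_scores = [0.05, 0.10, 0.40, 0.70, 0.85]
    for step, signal, flagged in stream_prefix_hallucination(step_scores):
        print(f"step {step}: prefix signal {signal:.3f} flagged={flagged}")
```

Whatever the actual aggregation rule, the streaming framing means each new step updates the global reasoning-state estimate incrementally, without reprocessing the entire prefix.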
Similar Papers
Auditing Meta-Cognitive Hallucinations in Reasoning Large Language Models
Computers and Society
Audits AI's mistakes in its step-by-step thinking.
Learning to Reason for Hallucination Span Detection
Computation and Language
Teaches computers to spot fake facts in writing.
Hallucination Detection via Internal States and Structured Reasoning Consistency in Large Language Models
Computation and Language
Finds when AI lies or makes mistakes.