Consistency Is the Key: Detecting Hallucinations in LLM Generated Text By Checking Inconsistencies About Key Facts
By: Raavi Gupta, Pranav Hari Panicker, Sumit Bhatia, and more
Potential Business Impact:
Spots factually incorrect claims in AI-generated text.
Large language models (LLMs), despite their remarkable text generation capabilities, often hallucinate and generate text that is factually incorrect and not grounded in real-world knowledge. This poses serious risks in domains like healthcare, finance, and customer support. A typical way to use LLMs is via the APIs provided by LLM vendors, where there is no access to model weights or options to fine-tune the model. Existing methods to detect hallucinations in such settings, where model access is restricted or resources are constrained, typically require making multiple LLM API calls, increasing latency and API cost. We introduce CONFACTCHECK, an efficient hallucination detection approach that does not leverage any external knowledge base and works on the simple intuition that responses to factual probes within the generated text should be consistent within a single LLM and across different LLMs. Rigorous empirical evaluation on multiple datasets that cover both factual text generation and open-ended generation shows that CONFACTCHECK can detect hallucinated facts efficiently using fewer resources and achieves higher accuracy scores compared to existing baselines that operate under similar conditions. Our code is available here.
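The abstract only states the consistency intuition at a high level. The sketch below illustrates one way such a check could be wired up against black-box LLM APIs: turn the factual claims in a generated passage into probe questions, re-ask them of the generating LLM and of a second LLM, and flag claims whose answers disagree. The prompts, helper names (extract_factual_probes, is_consistent, flag_hallucinations), the use of an LLM call as the consistency judge, and the pass/fail logic are all illustrative assumptions, not the paper's exact CONFACTCHECK procedure.

```python
# Minimal sketch of the consistency-checking intuition, assuming access only
# to black-box LLM APIs. Names and prompts are illustrative, not CONFACTCHECK.
from typing import Callable, List, Tuple

# `LLM` is any callable that sends a prompt string to an LLM API and
# returns the text of its reply.
LLM = Callable[[str], str]


def extract_factual_probes(text: str, llm: LLM) -> List[Tuple[str, str]]:
    """Ask the LLM to turn each factual claim in `text` into a
    (question, claimed_answer) pair that probes that claim."""
    prompt = (
        "For each factual claim in the passage below, write one short "
        "question whose answer is the claimed fact, followed by the fact "
        "itself, separated by ' || '. One pair per line.\n\n" + text
    )
    pairs = []
    for line in llm(prompt).splitlines():
        if "||" in line:
            question, answer = line.split("||", 1)
            pairs.append((question.strip(), answer.strip()))
    return pairs


def is_consistent(expected: str, regenerated: str, judge: LLM) -> bool:
    """Cheap consistency check: ask an LLM whether two answers state the
    same fact. (Exact matching or an NLI model could be used instead.)"""
    verdict = judge(
        "Do these two answers state the same fact? Reply YES or NO.\n"
        f"A: {expected}\nB: {regenerated}"
    )
    return verdict.strip().upper().startswith("YES")


def flag_hallucinations(generated_text: str, same_llm: LLM, other_llm: LLM):
    """Flag facts whose probe answers disagree within the generating LLM
    or across a second, independent LLM."""
    flagged = []
    for question, claimed in extract_factual_probes(generated_text, same_llm):
        self_answer = same_llm(question)
        cross_answer = other_llm(question)
        if not (is_consistent(claimed, self_answer, same_llm)
                and is_consistent(claimed, cross_answer, same_llm)):
            flagged.append((question, claimed))
    return flagged
```

Note that this sketch makes a handful of API calls per flagged passage rather than many repeated samples, which is the kind of cost saving the abstract claims over sampling-based baselines; the exact call budget of CONFACTCHECK is described in the paper itself.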
Similar Papers
Multi-Modal Fact-Verification Framework for Reducing Hallucinations in Large Language Models
Artificial Intelligence
Verifies facts across multiple modalities to reduce AI hallucinations.
Hallucination to Truth: A Review of Fact-Checking and Factuality Evaluation in Large Language Models
Computation and Language
Reviews how to fact-check AI answers and evaluate their factuality.
A comprehensive taxonomy of hallucinations in Large Language Models
Computation and Language
Sorts the different ways AI models make things up into a taxonomy.