Lie to Me: Knowledge Graphs for Robust Hallucination Self-Detection in LLMs
By: Sahil Kale, Antonio Luca Alfeo
Potential Business Impact:
Helps AI spot when it is making things up.
Hallucinations, the generation of apparently convincing yet false statements, remain a major barrier to the safe deployment of LLMs. Building on the strong performance of self-detection methods, we examine the use of structured knowledge representations, namely knowledge graphs, to improve hallucination self-detection. Specifically, we propose a simple yet powerful approach that enriches hallucination self-detection by (i) converting LLM responses into knowledge graphs of entities and relations, and (ii) using these graphs to estimate the likelihood that a response contains hallucinations. We evaluate the proposed approach using two widely used LLMs, GPT-4o and Gemini-2.5-Flash, across two hallucination detection datasets. To support more reliable future benchmarking, one of these datasets has been manually curated, enhanced, and released as a secondary outcome of this work. Compared to standard self-detection methods and SelfCheckGPT, a state-of-the-art approach, our method achieves up to a 16% relative improvement in accuracy and a 20% relative improvement in F1-score. Our results show that LLMs can better analyse atomic facts when they are structured as knowledge graphs, even when their initial outputs contain inaccuracies. This low-cost, model-agnostic approach paves the way toward safer and more trustworthy language models.
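The abstract describes a two-step pipeline: first decompose a model response into a knowledge graph of (subject, relation, object) triples, then have the model judge that graph to score the likelihood of hallucination. Below is a minimal Python sketch of that idea, assuming the OpenAI chat completions API and GPT-4o (one of the two models evaluated). The prompts, the JSON schema, and the scoring rule (fraction of triples flagged as unsupported) are illustrative assumptions, not the authors' exact implementation.

import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o"   # one of the two models evaluated in the paper


def extract_knowledge_graph(response_text: str) -> list[dict]:
    # Step (i): ask the model to decompose a response into
    # (subject, relation, object) triples, i.e. a simple knowledge graph.
    # The prompt wording and JSON schema are illustrative assumptions.
    prompt = (
        "Convert the following text into a knowledge graph. "
        "Return a JSON object with a key 'triples', a list of objects "
        "with keys 'subject', 'relation', 'object'.\n\n"
        f"Text:\n{response_text}"
    )
    completion = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(completion.choices[0].message.content)["triples"]


def estimate_hallucination_likelihood(question: str, triples: list[dict]) -> float:
    # Step (ii): ask the model to judge each triple and return the fraction
    # it flags as incorrect (0.0 = likely faithful, 1.0 = likely hallucinated).
    # This scoring rule is an assumption for illustration only.
    graph = "\n".join(
        f"- ({t['subject']}, {t['relation']}, {t['object']})" for t in triples
    )
    prompt = (
        f"Question: {question}\n"
        f"Knowledge graph extracted from an answer:\n{graph}\n\n"
        "For each triple, decide whether it is factually correct. "
        'Return a JSON object {"verdicts": [true, false, ...]} in the same order.'
    )
    completion = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    verdicts = json.loads(completion.choices[0].message.content)["verdicts"]
    flagged = sum(1 for v in verdicts if not v)
    return flagged / max(len(verdicts), 1)


if __name__ == "__main__":
    question = "Who wrote 'Pride and Prejudice'?"
    answer = "Pride and Prejudice was written by Jane Austen in 1813."
    triples = extract_knowledge_graph(answer)
    score = estimate_hallucination_likelihood(question, triples)
    print(f"Hallucination likelihood: {score:.2f}")

Because both steps only require prompting the model itself, a sketch like this stays model-agnostic: swapping GPT-4o for Gemini-2.5-Flash or another LLM only changes the client call, not the graph-based structure of the check.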
Similar Papers
Graphing the Truth: Structured Visualizations for Automated Hallucination Detection in LLMs
Computation and Language
Shows when AI might be making things up.
Mitigating LLM Hallucinations with Knowledge Graphs: A Case Study
Human-Computer Interaction
AI learns facts to stop making things up.
FactSelfCheck: Fact-Level Black-Box Hallucination Detection for LLMs
Machine Learning (CS)
Checks if AI is telling the truth, fact by fact.