Score: 2

Lie to Me: Knowledge Graphs for Robust Hallucination Self-Detection in LLMs

Published: December 29, 2025 | arXiv ID: 2512.23547v1

By: Sahil Kale, Antonio Luca Alfeo

Potential Business Impact:

Helps AI detect when it is making things up, so its answers can be trusted.

Business Areas:
Visual Search, Internet Services

Hallucinations, the generation of apparently convincing yet false statements, remain a major barrier to the safe deployment of LLMs. Building on the strong performance of self-detection methods, we examine the use of structured knowledge representations, namely knowledge graphs, to improve hallucination self-detection. Specifically, we propose a simple yet powerful approach that enriches hallucination self-detection by (i) converting LLM responses into knowledge graphs of entities and relations, and (ii) using these graphs to estimate the likelihood that a response contains hallucinations. We evaluate the proposed approach using two widely used LLMs, GPT-4o and Gemini-2.5-Flash, across two hallucination detection datasets. To support more reliable future benchmarking, one of these datasets has been manually curated and enhanced and is released as a secondary outcome of this work. Compared to standard self-detection methods and SelfCheckGPT, a state-of-the-art approach, our method achieves up to 16% relative improvement in accuracy and 20% in F1-score. Our results show that LLMs can better analyse atomic facts when they are structured as knowledge graphs, even when initial outputs contain inaccuracies. This low-cost, model-agnostic approach paves the way toward safer and more trustworthy language models.
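A minimal sketch of the two-step idea described above, written against the OpenAI chat API as an illustrative judge model. The prompts, function names, and the simple "fraction of flagged triples" aggregation are assumptions for illustration, not the authors' exact formulation.

```python
# Sketch of graph-based hallucination self-detection:
# (i) convert a response into knowledge-graph triples,
# (ii) let the model judge each atomic triple, then aggregate.
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment


def extract_triples(response_text: str, model: str = "gpt-4o") -> list[str]:
    """Step (i): turn an LLM response into subject | relation | object triples."""
    prompt = (
        "Extract the factual claims in the text below as knowledge-graph triples, "
        "one per line in the form: subject | relation | object.\n\n" + response_text
    )
    out = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return [ln.strip() for ln in out.choices[0].message.content.splitlines() if "|" in ln]


def judge_triple(triple: str, model: str = "gpt-4o") -> bool:
    """Step (ii): ask the model whether a single atomic triple is factually correct."""
    prompt = (
        "Is the following statement factually correct? Answer only 'yes' or 'no'.\n"
        + triple
    )
    out = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return out.choices[0].message.content.strip().lower().startswith("yes")


def hallucination_score(response_text: str) -> float:
    """Aggregate triple-level verdicts into a response-level hallucination likelihood."""
    triples = extract_triples(response_text)
    if not triples:
        return 0.0
    flagged = sum(not judge_triple(t) for t in triples)
    return flagged / len(triples)  # higher means more likely to contain hallucinations


if __name__ == "__main__":
    answer = "Marie Curie won two Nobel Prizes and was born in Vienna."
    print(f"Estimated hallucination likelihood: {hallucination_score(answer):.2f}")
```

The structuring step is what distinguishes this from plain self-checking: the model is asked to verify small, atomic facts rather than the whole free-form response at once.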

Country of Origin
🇮🇹 Italy

Repos / Data Links

Page Count
9 pages

Category
Computer Science: Computation and Language