Score: 1

Detecting Hallucinations in Retrieval-Augmented Generation via Semantic-level Internal Reasoning Graph

Published: January 6, 2026 | arXiv ID: 2601.03052v1

By: Jianpeng Hu, Yanzeng Li, Jialun Zhong, and more

Potential Business Impact:

Detects when an AI system's answer is not supported by the documents it retrieved.

Business Areas:
Software

Retrieval-augmented generation (RAG) systems based on large language models (LLMs) have made significant progress. They effectively reduce factuality hallucinations, but faithfulness hallucinations still occur. Previous methods for detecting faithfulness hallucinations either fail to capture the model's internal reasoning process or handle those features too coarsely, making them difficult for discriminators to learn from. This paper proposes a semantic-level internal reasoning graph-based method for detecting faithfulness hallucinations. Specifically, we first extend the layer-wise relevance propagation algorithm from the token level to the semantic level, constructing an internal reasoning graph from attribution vectors. This provides a more faithful semantic-level representation of dependencies. Furthermore, we design a general framework based on a small pre-trained language model that exploits the dependencies in the LLM's reasoning for training and hallucination detection, and that can dynamically adjust the pass rate of correct samples through a threshold. Experimental results demonstrate that our method achieves better overall performance than state-of-the-art baselines on RAGTruth and Dolly-15k.
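To make the idea concrete, here is a minimal sketch (not the authors' code) of the general pattern the abstract describes: token-level attribution weights are aggregated into semantic spans to form a graph, and an answer span is flagged when its attribution mass toward the retrieved context falls below a tunable threshold. The span mapping, the toy relevance weights, and the 0.5 threshold are all illustrative assumptions; the paper trains a small pre-trained language model as the discriminator rather than using a fixed rule.

```python
# Hypothetical sketch: aggregate token-level relevance into a semantic-level
# graph, then threshold an answer span's support from the retrieved context.
from collections import defaultdict

def build_semantic_graph(token_relevance, token_to_span):
    """Sum token-level attribution weights into span-level edge weights.

    token_relevance: {(src_token, dst_token): relevance_weight}
    token_to_span:   {token_index: span_id} mapping tokens to semantic units
    Returns {(src_span, dst_span): aggregated_weight}.
    """
    span_edges = defaultdict(float)
    for (src, dst), weight in token_relevance.items():
        span_edges[(token_to_span[src], token_to_span[dst])] += weight
    return dict(span_edges)

def detect_hallucination(span_edges, answer_span, context_spans, threshold=0.5):
    """Flag an answer span whose attribution mass to the retrieved context
    falls below the threshold (a stand-in for the learned discriminator)."""
    total = sum(w for (src, _), w in span_edges.items() if src == answer_span)
    grounded = sum(w for (src, dst), w in span_edges.items()
                   if src == answer_span and dst in context_spans)
    support = grounded / total if total else 0.0
    return support < threshold, support

if __name__ == "__main__":
    # Toy example: answer tokens 4-5 attribute mostly to the context spans.
    relevance = {(4, 0): 0.6, (4, 1): 0.1, (5, 2): 0.2, (5, 4): 0.1}
    spans = {0: "ctx0", 1: "ctx0", 2: "ctx1", 4: "ans", 5: "ans"}
    graph = build_semantic_graph(relevance, spans)
    flagged, score = detect_hallucination(graph, "ans", {"ctx0", "ctx1"})
    print(f"hallucination={flagged}, support={score:.2f}")
```

In this toy run the answer's attribution is mostly grounded in the context (support 0.90), so it passes; raising the threshold corresponds to the paper's idea of dynamically adjusting the pass rate of correct samples.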

Country of Origin
🇨🇳 China

Page Count
15 pages

Category
Computer Science:
Computation and Language