Detecting Hallucinations in Retrieval-Augmented Generation via Semantic-level Internal Reasoning Graph
By: Jianpeng Hu, Yanzeng Li, Jialun Zhong, and more
Potential Business Impact:
Finds when AI lies about facts it learned.
Retrieval-augmented generation (RAG) systems based on large language models (LLMs) have made significant progress. They can effectively reduce factuality hallucinations, but faithfulness hallucinations still occur. Previous methods for detecting faithfulness hallucinations either fail to capture the model's internal reasoning process or handle those features too coarsely, making it difficult for a discriminator to learn from them. This paper proposes a semantic-level internal reasoning graph-based method for detecting faithfulness hallucinations. Specifically, we first extend the layer-wise relevance propagation algorithm from the token level to the semantic level, constructing an internal reasoning graph from attribution vectors. This provides a more faithful semantic-level representation of dependencies. Furthermore, we design a general framework based on a small pre-trained language model that uses the dependencies in the LLM's reasoning for training and hallucination detection, and that can dynamically adjust the pass rate of correct samples through a threshold. Experimental results demonstrate that our method achieves better overall performance than state-of-the-art baselines on RAGTruth and Dolly-15k.
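The abstract describes pooling token-level relevance scores into a semantic-level attribution graph and then thresholding how well each generated span is grounded in the retrieved context. The sketch below illustrates that general idea only; the mean-pooling aggregation, the span boundaries, the max-support scoring rule, and the threshold value are illustrative assumptions, not the paper's actual pipeline or trained discriminator.

```python
# Minimal sketch, assuming token-level relevance scores (e.g. from a layer-wise
# relevance propagation pass over the LLM) are already available. Aggregation by
# mean pooling and the grounding threshold are hypothetical choices for illustration.
import numpy as np

def aggregate_to_semantic_level(token_relevance, generated_spans, context_spans):
    """Pool a token-to-token relevance matrix into a span-to-span attribution matrix.

    token_relevance : (num_generated_tokens, num_context_tokens) array of relevance scores.
    generated_spans : list of (start, end) token ranges for generated semantic units.
    context_spans   : list of (start, end) token ranges for retrieved-context units.
    Returns a (num_generated_spans, num_context_spans) matrix that can serve as the
    weighted adjacency of a semantic-level reasoning graph.
    """
    graph = np.zeros((len(generated_spans), len(context_spans)))
    for i, (gs, ge) in enumerate(generated_spans):
        for j, (cs, ce) in enumerate(context_spans):
            graph[i, j] = token_relevance[gs:ge, cs:ce].mean()
    return graph

def flag_ungrounded_spans(attribution_graph, threshold=0.1):
    """Flag generated spans whose strongest attribution to any context span is weak.

    A span with max attribution below the threshold is marked as a potential
    faithfulness hallucination; raising the threshold lets fewer samples pass.
    """
    max_support = attribution_graph.max(axis=1)
    return max_support < threshold

# Toy usage: 6 generated tokens grouped into 2 spans, 8 context tokens in 2 spans.
rng = np.random.default_rng(0)
token_relevance = rng.random((6, 8))
token_relevance[3:6, :] *= 0.05          # second generated span is barely attributed to context
generated_spans = [(0, 3), (3, 6)]
context_spans = [(0, 4), (4, 8)]

graph = aggregate_to_semantic_level(token_relevance, generated_spans, context_spans)
print(flag_ungrounded_spans(graph, threshold=0.1))  # e.g. [False  True]
```

In the paper's framework, a small pre-trained language model is trained on such dependency representations instead of a fixed rule, but the same threshold idea controls how many correct samples are allowed to pass.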
Similar Papers
Detecting Hallucinations in Graph Retrieval-Augmented Generation via Attention Patterns and Semantic Alignment
Computation and Language
Makes AI understand facts better, stops fake answers.
Toward Faithful Retrieval-Augmented Generation with Sparse Autoencoders
Computation and Language
Finds when AI lies about facts.
Mitigating Hallucination in Large Language Models (LLMs): An Application-Oriented Survey on RAG, Reasoning, and Agentic Systems
Computation and Language
Makes AI tell the truth, not make things up.