CLATTER: Comprehensive Entailment Reasoning for Hallucination Detection

Published: June 5, 2025 | arXiv ID: 2506.05243v1

By: Ron Eliav, Arie Cattan, Eran Hirsch, and more

Potential Business Impact:

Helps detect when an AI model is making things up, by checking its output against source documents.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

A common approach to hallucination detection casts it as a natural language inference (NLI) task, often using LLMs to classify whether the generated text is entailed by corresponding reference texts. Since entailment classification is a complex reasoning task, one would expect LLMs to benefit from generating an explicit reasoning process, as in chain-of-thought (CoT) reasoning or the explicit "thinking" of recent reasoning models. In this work, we propose that guiding such models through a systematic and comprehensive reasoning process, one that both decomposes the text into smaller facts and finds evidence in the source for each fact, allows them to make much finer-grained and more accurate entailment decisions, leading to increased performance. To that end, we define a three-step reasoning process consisting of (i) claim decomposition, (ii) sub-claim attribution and entailment classification, and (iii) aggregated classification, and show that such guided reasoning indeed yields improved hallucination detection. Building on this reasoning framework, we introduce an analysis scheme consisting of several metrics that measure the quality of the intermediate reasoning steps, providing additional empirical evidence for the improved quality of our guided reasoning scheme.
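The three-step scheme maps naturally onto a prompting pipeline. The Python sketch below shows one plausible way to wire it up; it is an illustration under stated assumptions, not the paper's implementation. In particular, `call_llm` is a hypothetical stand-in for any prompt-to-text model call, and the prompt wording and the "flag if any sub-claim fails" aggregation rule are illustrative choices rather than the paper's exact templates.

```python
# A minimal sketch of a three-step entailment reasoning pipeline:
# (i) decompose the generated text into atomic sub-claims,
# (ii) attribute each sub-claim to source evidence and classify entailment,
# (iii) aggregate the per-sub-claim decisions into one verdict.
# `call_llm` is a hypothetical prompt -> text function the caller supplies.

from typing import Callable, List


def decompose(text: str, call_llm: Callable[[str], str]) -> List[str]:
    """Step (i): split the generated text into atomic facts, one per line."""
    prompt = (
        "Decompose the following text into independent atomic facts, "
        f"one per line:\n\n{text}"
    )
    return [line.strip() for line in call_llm(prompt).splitlines() if line.strip()]


def attribute_and_classify(sub_claim: str, source: str,
                           call_llm: Callable[[str], str]) -> bool:
    """Step (ii): locate supporting evidence in the source, then decide entailment."""
    prompt = (
        f"Source:\n{source}\n\nFact: {sub_claim}\n\n"
        "Quote the source span (if any) that supports this fact, then answer "
        "ENTAILED or NOT_ENTAILED on the final line."
    )
    verdict = call_llm(prompt).strip().splitlines()[-1]
    return verdict.upper() == "ENTAILED"


def detect_hallucination(generated: str, source: str,
                         call_llm: Callable[[str], str]) -> bool:
    """Step (iii): aggregate -- flag a hallucination if any sub-claim is unsupported."""
    sub_claims = decompose(generated, call_llm)
    return any(not attribute_and_classify(sc, source, call_llm)
               for sc in sub_claims)
```

One side benefit of structuring the pipeline this way is that the intermediate outputs (the decomposed sub-claims and the quoted evidence spans) are exactly the artifacts the paper's analysis metrics evaluate, so reasoning quality can be measured step by step rather than only at the final verdict.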

Page Count
19 pages

Category
Computer Science:
Computation and Language