Improved Evidence Extraction for Document Inconsistency Detection with LLMs

Published: January 6, 2026 | arXiv ID: 2601.02627v1

By: Nelvin Tan, Yaowen Zhang, James Asikin Cheung, and more

Potential Business Impact:

Automatically detects inconsistencies within a document and pinpoints the specific sentences that conflict.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) are becoming useful in many domains due to their impressive abilities that arise from large training datasets and large model sizes. However, research on LLM-based approaches to document inconsistency detection is relatively limited. There are two key aspects of document inconsistency detection: (i) classification of whether there exists any inconsistency, and (ii) providing evidence of the inconsistent sentences. We focus on the latter, and introduce new comprehensive evidence-extraction metrics and a redact-and-retry framework with constrained filtering that substantially improves LLM-based document inconsistency detection over direct prompting. We back our claims with promising experimental results.
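The abstract names a redact-and-retry framework with constrained filtering but does not spell it out. The sketch below shows one plausible shape for such a loop, under assumptions of mine rather than the paper's actual design: a caller-supplied `query_llm` function, an exact-match filter standing in for the "constrained filtering" step, and a fixed retry budget. Prompt text and parameter names are illustrative, not taken from the paper.

```python
# A minimal sketch of a redact-and-retry loop with constrained filtering.
# All names here (query_llm, max_rounds, the prompt text) are illustrative
# assumptions; the paper's exact prompts and parameters are not given in
# this summary.
from typing import Callable

def redact_and_retry(
    sentences: list[str],
    query_llm: Callable[[str], list[str]],
    max_rounds: int = 3,
) -> list[str]:
    """Iteratively extract inconsistent-sentence evidence from a document."""
    evidence: list[str] = []
    remaining = list(sentences)
    for _ in range(max_rounds):
        prompt = (
            "List each sentence below that is inconsistent with the rest, "
            "verbatim, one per line:\n" + "\n".join(remaining)
        )
        candidates = query_llm(prompt)
        # Constrained filtering: keep only outputs that exactly match a
        # sentence still present in the document, dropping hallucinated text.
        found = [c for c in candidates if c in remaining]
        if not found:
            break  # no new evidence surfaced; stop retrying
        evidence.extend(found)
        # Redact confirmed evidence so the next pass can surface sentences
        # the model previously overlooked.
        remaining = [s for s in remaining if s not in found]
    return evidence
```

The exact-match filter is the load-bearing design choice in this sketch: it forces every extracted piece of evidence to be a real sentence from the document, which is what makes the redaction step well-defined on each retry.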

Page Count
10 pages

Category
Computer Science: Computation and Language