Improved Evidence Extraction for Document Inconsistency Detection with LLMs
By: Nelvin Tan, Yaowen Zhang, James Asikin Cheung, and more
Potential Business Impact:
Finds inconsistencies in documents and shows exactly where they are.
Large language models (LLMs) are becoming useful in many domains, thanks to the impressive abilities that arise from large training datasets and large model sizes. However, research on LLM-based approaches to document inconsistency detection remains relatively limited. Document inconsistency detection has two key aspects: (i) classifying whether any inconsistency exists, and (ii) providing evidence in the form of the inconsistent sentences. We focus on the latter: we introduce new, comprehensive evidence-extraction metrics and a redact-and-retry framework with constrained filtering, which substantially improves LLM-based document inconsistency detection over direct prompting. We back our claims with promising experimental results.
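The abstract only names the redact-and-retry framework with constrained filtering; it does not spell out the mechanics. Below is a minimal sketch of one plausible reading of that loop. The `extract_evidence` helper, the `llm` callable, the prompt wording, and `max_rounds` are all illustrative assumptions, not the authors' implementation.

```python
from typing import Callable, List, Set

def extract_evidence(
    sentences: List[str],
    llm: Callable[[str], str],   # hypothetical LLM interface: prompt -> raw text
    max_rounds: int = 5,
) -> List[int]:
    """Sketch of a redact-and-retry loop: ask the model for inconsistent
    sentence indices, keep only well-formed in-range answers (constrained
    filtering), redact that evidence, and retry for further evidence."""
    working = list(sentences)
    evidence: Set[int] = set()

    for _ in range(max_rounds):
        numbered = "\n".join(f"[{i}] {s}" for i, s in enumerate(working))
        prompt = (
            "List the indices of sentences that are inconsistent with the "
            "rest of the document, as comma-separated integers, or NONE.\n\n"
            + numbered
        )
        raw = llm(prompt)

        # Constrained filtering: discard any token that is not a valid,
        # not-yet-redacted sentence index.
        found = {
            int(tok) for tok in raw.replace(",", " ").split()
            if tok.isdigit() and int(tok) < len(working) and int(tok) not in evidence
        }
        if not found:
            break
        evidence |= found
        for i in found:              # redact the extracted evidence and retry
            working[i] = "[REDACTED]"
    return sorted(evidence)

# Toy usage with a stubbed model: it flags sentence 1, then reports NONE.
replies = iter(["1", "NONE"])
doc = ["Revenue rose 10% in Q3.", "Revenue fell 10% in Q3.", "Costs were flat."]
print(extract_evidence(doc, lambda _prompt: next(replies)))  # -> [1]
```

In this sketch, constrained filtering simply means rejecting any model output that does not parse to a valid, not-yet-redacted sentence index; the paper's actual filter and retry criteria may well be richer.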
Similar Papers
Can LLMs extract human-like fine-grained evidence for evidence-based fact-checking?
Computation and Language
Helps computers find truth in online comments.
On Finding Inconsistencies in Documents
Computation and Language
Finds mistakes in important papers faster.
Query-Document Dense Vectors for LLM Relevance Judgment Bias Analysis
Information Retrieval
Finds where AI makes mistakes judging information.