Score: 2

FactCorrector: A Graph-Inspired Approach to Long-Form Factuality Correction of Large Language Models

Published: January 16, 2026 | arXiv ID: 2601.11232v1

By: Javier Carnerero-Cano, Massimiliano Pronesti, Radu Marinescu, and more

BigTech Affiliations: IBM

Potential Business Impact:

Automatically corrects factual errors in LLM-generated answers after the fact, making knowledge-intensive applications more trustworthy without retraining the model.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) are widely used in knowledge-intensive applications but often generate factually incorrect responses. A promising approach to rectifying these flaws is to correct LLM outputs using feedback. In this paper, we introduce FactCorrector, a new post-hoc correction method that adapts across domains without retraining and leverages structured feedback about the factuality of the original response to generate a correction. To support rigorous evaluation of factuality correction methods, we also develop the VELI5 benchmark, a novel dataset containing systematically injected factual errors and ground-truth corrections. Experiments on VELI5 and several popular long-form factuality datasets show that FactCorrector significantly improves factual precision while preserving relevance, outperforming strong baselines. We release our code at https://ibm.biz/factcorrector.
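To make the idea of feedback-driven post-hoc correction concrete, here is a minimal sketch of such a loop: per-claim fact-check feedback is packed into a structured prompt, and the model is asked to rewrite only what the feedback flags. All names (`FactFeedback`, `call_llm`, `build_correction_prompt`), the verdict labels, and the prompt wording are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a post-hoc, feedback-conditioned correction step.
# Nothing here is retrained; only the correction prompt changes per response.
from dataclasses import dataclass


@dataclass
class FactFeedback:
    claim: str     # an atomic claim extracted from the original response
    verdict: str   # e.g. "supported", "contradicted", "unverifiable" (assumed labels)
    evidence: str  # snippet from a trusted source backing the verdict


def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion client; plug in your own."""
    raise NotImplementedError("wire up an LLM client here")


def build_correction_prompt(question: str, response: str,
                            feedback: list[FactFeedback]) -> str:
    """Serialize structured per-claim feedback into a single correction prompt."""
    lines = [
        f"Question: {question}",
        f"Original answer: {response}",
        "Fact-check feedback:",
    ]
    for fb in feedback:
        lines.append(f"- Claim: {fb.claim} | Verdict: {fb.verdict} | Evidence: {fb.evidence}")
    lines.append(
        "Rewrite the answer so every claim is consistent with the feedback, "
        "keeping correct and relevant content unchanged."
    )
    return "\n".join(lines)


def correct_response(question: str, response: str,
                     feedback: list[FactFeedback]) -> str:
    """Post-hoc correction: the base model is untouched; feedback drives the rewrite."""
    return call_llm(build_correction_prompt(question, response, feedback))
```

Because the feedback is structured per claim rather than a free-form critique, the same loop can be pointed at a new domain simply by swapping the verifier that produces the `FactFeedback` records.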

Country of Origin
🇺🇸 United States

Repos / Data Links
https://ibm.biz/factcorrector

Page Count
29 pages

Category
Computer Science:
Computation and Language