FactCorrector: A Graph-Inspired Approach to Long-Form Factuality Correction of Large Language Models
By: Javier Carnerero-Cano, Massimiliano Pronesti, Radu Marinescu, and more
Potential Business Impact:
Fixes computer answers to be more truthful.
Large language models (LLMs) are widely used in knowledge-intensive applications but often generate factually incorrect responses. A promising way to rectify these flaws is to correct LLM outputs using feedback. In this paper, we introduce FactCorrector, a new post-hoc correction method that adapts across domains without retraining and leverages structured feedback about the factuality of the original response to generate a correction. To support rigorous evaluation of factuality correction methods, we also develop the VELI5 benchmark, a novel dataset containing systematically injected factual errors and ground-truth corrections. Experiments on VELI5 and several popular long-form factuality datasets show that FactCorrector significantly improves factual precision while preserving relevance, outperforming strong baselines. We release our code at https://ibm.biz/factcorrector.
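To make the idea of post-hoc correction from structured factuality feedback concrete, here is a minimal sketch in Python. It assumes the feedback takes the form of per-claim verdicts with supporting evidence; the names `AtomicClaim`, `correct_response`, the prompt wording, and the stub LLM are illustrative assumptions, not the paper's actual interface or algorithm.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical representation of structured factuality feedback:
# one verdict per atomic claim extracted from the original response.
@dataclass
class AtomicClaim:
    text: str            # the claim as stated in the response
    supported: bool      # verdict from an external factuality assessor
    evidence: List[str]  # retrieved passages backing the verdict

def correct_response(
    response: str,
    feedback: List[AtomicClaim],
    llm: Callable[[str], str],
) -> str:
    """Post-hoc correction sketch: rewrite only the unsupported claims,
    keeping supported content and overall relevance intact."""
    unsupported = [c for c in feedback if not c.supported]
    if not unsupported:
        return response  # nothing to correct

    issues = "\n".join(
        f"- Claim: {c.text}\n  Evidence: {' | '.join(c.evidence)}"
        for c in unsupported
    )
    prompt = (
        "Revise the response below so that every flagged claim agrees with "
        "the evidence, changing nothing else.\n\n"
        f"Response:\n{response}\n\n"
        f"Flagged claims:\n{issues}\n\n"
        "Revised response:"
    )
    return llm(prompt)

# Usage with a stub LLM; any chat-completion client could be passed instead.
if __name__ == "__main__":
    stub_llm = lambda prompt: "The Eiffel Tower is about 330 metres tall."
    feedback = [
        AtomicClaim(
            text="The Eiffel Tower is 500 metres tall.",
            supported=False,
            evidence=["The tower stands roughly 330 metres high."],
        )
    ]
    print(correct_response(
        "The Eiffel Tower is 500 metres tall.", feedback, stub_llm
    ))
```

Because the corrector only touches flagged claims and is driven entirely by the feedback passed in, this kind of loop needs no retraining to move across domains, which is the property the abstract emphasizes.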
Similar Papers
FactReasoner: A Probabilistic Approach to Long-Form Factuality Assessment for Large Language Models
Computation and Language
Checks if AI-written stories are true.
GraphCheck: Breaking Long-Term Text Barriers with Extracted Knowledge Graph-Powered Fact-Checking
Computation and Language
Finds and fixes lies in computer writing.