Score: 2

FaStFACT: Faster, Stronger Long-Form Factuality Evaluations in LLMs

Published: October 13, 2025 | arXiv ID: 2510.12839v1

By: Yingjia Wan, Haochen Tan, Xiao Zhu, and more

Potential Business Impact:

Quickly checks whether AI-generated writing is factually accurate.

Business Areas:
Text Analytics, Data and Analytics, Software

Evaluating the factuality of long-form generations from Large Language Models (LLMs) remains challenging due to accuracy issues and costly human assessment. Prior efforts attempt this by decomposing text into claims, searching for evidence, and verifying claims, but suffer from critical drawbacks: (1) inefficiency, because complex pipeline components are unsuitable for long LLM outputs, and (2) ineffectiveness, stemming from inaccurate claim sets and insufficient evidence collected as one-line snippets. To address these limitations, we propose FaStFACT, a fast and strong evaluation framework that achieves the highest alignment with human evaluation and the best efficiency among existing baselines. FaStFACT first employs chunk-level claim extraction integrated with confidence-based pre-verification, significantly reducing the cost of web searching and inference calls while ensuring reliability. For searching and verification, it collects document-level evidence from crawled webpages and selectively retrieves it during verification, addressing the evidence-insufficiency problem in previous pipelines. Extensive experiments on an aggregated, manually annotated benchmark demonstrate the reliability of FaStFACT in both efficiently and effectively evaluating the factuality of long-form LLM generations. Code and benchmark data are available at https://github.com/Yingjia-Wan/FastFact.
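The abstract's pipeline can be pictured with a minimal sketch: chunk the text, extract claims, let high-confidence claims skip the web search (pre-verification), and verify the rest against retrieved evidence. All helper names, the claim data shape, and the 0.9 threshold are illustrative assumptions based only on the abstract, not the authors' actual API.

```python
# Hypothetical sketch of a FaStFACT-style evaluation loop. Everything
# here (names, threshold, data shapes) is an assumption for illustration.

def chunk_text(sentences, size):
    """Group sentences into chunks for chunk-level claim extraction."""
    return [sentences[i:i + size] for i in range(0, len(sentences), size)]

def evaluate_factuality(claims, verify, threshold=0.9):
    """Return the fraction of supported claims.

    claims: list of dicts {"text": str, "confidence": float}, where
      confidence is the extractor's pre-verification score.
    verify: callable(claim_text) -> bool, standing in for the
      search-and-verify step over document-level evidence.
    """
    if not claims:
        return 1.0
    supported = 0
    for claim in claims:
        if claim["confidence"] >= threshold:
            # Confidence-based pre-verification: high-confidence claims
            # skip the costly web search entirely.
            supported += 1
        else:
            # Otherwise collect document-level evidence and verify.
            supported += bool(verify(claim["text"]))
    return supported / len(claims)
```

The pre-verification branch is where the claimed efficiency gain would come from: every claim that clears the threshold saves one web search and one verification inference call.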

Country of Origin
πŸ‡ΊπŸ‡Έ πŸ‡¬πŸ‡§ United States, United Kingdom

Repos / Data Links
https://github.com/Yingjia-Wan/FastFact

Page Count
42 pages

Category
Computer Science:
Computation and Language