Score: 2

FaStfact: Faster, Stronger Long-Form Factuality Evaluations in LLMs

Published: October 13, 2025 | arXiv ID: 2510.12839v2

By: Yingjia Wan, Haochen Tan, Xiao Zhu, and more

Potential Business Impact:

Quickly checks whether the facts in long AI-generated text are true.

Business Areas:
Text Analytics, Data and Analytics, Software

Evaluating the factuality of long-form generations from Large Language Models (LLMs) remains challenging due to efficiency bottlenecks and reliability concerns. Prior efforts approach this by decomposing text into claims, searching for evidence, and verifying the claims, but suffer from critical drawbacks: (1) inefficiency due to overcomplicated pipeline components, and (2) ineffectiveness stemming from inaccurate claim sets and insufficient evidence. To address these limitations, we propose FaStfact, an evaluation framework that achieves the highest alignment with human evaluation and the best time/token efficiency among existing baselines. FaStfact first employs chunk-level claim extraction integrated with confidence-based pre-verification, significantly reducing time and token cost while ensuring reliability. For searching and verification, it collects document-level evidence from crawled web pages and selectively retrieves from it during verification. Extensive experiments on an annotated benchmark, FaStfact-Bench, demonstrate the reliability of FaStfact in evaluating long-form factuality both efficiently and effectively. Code, benchmark data, and the annotation interface tool are available at https://github.com/Yingjia-Wan/FaStfact.
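The pipeline the abstract describes (chunk-level claim extraction, confidence-based pre-verification to skip cheap claims, and selective evidence retrieval only for the rest) can be sketched roughly as follows. All function names, the 0.9 confidence threshold, and the stub splitting logic are illustrative assumptions, not the authors' implementation; in practice the extraction and verification steps would be LLM and retrieval calls.

```python
def split_into_chunks(text, max_sentences=2):
    """Split text into small chunks so claims are extracted per chunk,
    not per sentence (an assumption about what 'chunk-level' means)."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [". ".join(sentences[i:i + max_sentences])
            for i in range(0, len(sentences), max_sentences)]

def extract_claims(chunk):
    """Stand-in for an LLM call that decomposes a chunk into atomic claims."""
    return [c.strip() for c in chunk.split(";") if c.strip()]

def pre_verify(confidence, threshold=0.9):
    """Confidence-based pre-verification: accept a claim without any web
    search when the model is already confident (threshold is an assumption)."""
    return confidence >= threshold

def evaluate(text, confidence_fn, verify_fn):
    """Run the sketched pipeline; verify_fn stands in for document-level
    evidence retrieval and verification, invoked only when pre-verification
    does not short-circuit."""
    results = {}
    for chunk in split_into_chunks(text):
        for claim in extract_claims(chunk):
            if pre_verify(confidence_fn(claim)):
                results[claim] = True          # accepted without retrieval
            else:
                results[claim] = verify_fn(claim)  # evidence-backed check
    return results
```

A toy run with stub scoring and verification functions shows the control flow: high-confidence claims skip retrieval, low-confidence ones fall through to `verify_fn`.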

Country of Origin
πŸ‡¬πŸ‡§ πŸ‡ΊπŸ‡Έ United Kingdom, United States

Repos / Data Links
https://github.com/Yingjia-Wan/FaStfact

Page Count
42 pages

Category
Computer Science:
Computation and Language