Score: 4

FACTORY: A Challenging Human-Verified Prompt Set for Long-Form Factuality

Published: July 31, 2025 | arXiv ID: 2508.00109v1

By: Mingda Chen, Yang Li, Xilun Chen, and more

BigTech Affiliations: Meta

Potential Business Impact:

Provides a human-verified benchmark for stress-testing the factual accuracy of long-form AI responses

Long-form factuality evaluation assesses the ability of models to generate accurate, comprehensive responses to short prompts. Existing benchmarks often lack human verification, leading to potential quality issues. To address this limitation, we introduce FACTORY, a large-scale, human-verified prompt set. Developed using a model-in-the-loop approach and refined by humans, FACTORY includes challenging prompts that are fact-seeking, answerable, and unambiguous. We conduct human evaluations on 6 state-of-the-art language models using FACTORY and existing datasets. Our results show that FACTORY is a challenging benchmark: approximately 40% of the claims made in the responses of SOTA models are not factual, compared to only 10% for other datasets. Our analysis identifies the strengths of FACTORY over prior benchmarks, emphasizing its reliability and the necessity for models to reason across long-tailed facts.
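The headline numbers above are claim-level non-factuality rates (roughly 40% of claims unsupported on FACTORY versus about 10% on prior datasets). As a rough illustration of how such a rate is computed, here is a minimal sketch; the function name and the per-claim labels are hypothetical placeholders, not the paper's actual evaluation pipeline.

```python
from typing import List


def non_factual_rate(claim_labels: List[bool]) -> float:
    """Fraction of extracted claims judged non-factual.

    `claim_labels` holds one human verdict per claim:
    True if the claim is supported, False otherwise.
    """
    if not claim_labels:
        return 0.0
    return sum(1 for supported in claim_labels if not supported) / len(claim_labels)


# Hypothetical example: 4 of 10 claims judged unsupported -> 0.4,
# matching the ~40% rate reported for SOTA models on FACTORY.
labels = [True, True, False, True, False, True, False, True, True, False]
print(non_factual_rate(labels))  # 0.4
```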

Country of Origin
🇺🇸 United States


Page Count
14 pages

Category
Computer Science: Computation and Language