SimpleQA Verified: A Reliable Factuality Benchmark to Measure Parametric Knowledge

Published: September 9, 2025 | arXiv ID: 2509.07968v1

By: Lukas Haas, Gal Yona, Giovanni D'Antonio, and more

BigTech Affiliations: Google

Potential Business Impact:

Measures how reliably AI models answer factual questions, so teams can compare models on truthfulness and pick ones that hallucinate less.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We introduce SimpleQA Verified, a 1,000-prompt benchmark for evaluating Large Language Model (LLM) short-form factuality based on OpenAI's SimpleQA. It addresses critical limitations in OpenAI's benchmark, including noisy and incorrect labels, topical biases, and question redundancy. SimpleQA Verified was created through a rigorous multi-stage filtering process involving de-duplication, topic balancing, and source reconciliation to produce a more reliable and challenging evaluation set, alongside improvements in the autorater prompt. On this new benchmark, Gemini 2.5 Pro achieves a state-of-the-art F1-score of 55.6, outperforming other frontier models, including GPT-5. This work provides the research community with a higher-fidelity tool to track genuine progress in parametric model factuality and to mitigate hallucinations. The benchmark dataset, evaluation code, and leaderboard are available at: https://www.kaggle.com/benchmarks/deepmind/simpleqa-verified.
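For context, SimpleQA-style benchmarks typically grade each model answer as correct, incorrect, or not attempted, and report an F1-score that balances overall accuracy against accuracy on attempted questions. The sketch below is a minimal illustration of that style of metric, assuming graded labels are already available; the function and label names are hypothetical and are not the paper's released evaluation code.

```python
from collections import Counter

def simpleqa_f1(grades: list[str]) -> float:
    """Harmonic mean of overall accuracy and accuracy on attempted
    questions, in the style of SimpleQA factuality reporting.

    `grades` holds one label per question:
    "correct", "incorrect", or "not_attempted".
    """
    counts = Counter(grades)
    total = len(grades)
    attempted = counts["correct"] + counts["incorrect"]
    if total == 0 or attempted == 0 or counts["correct"] == 0:
        return 0.0
    overall_correct = counts["correct"] / total               # recall-like term
    correct_given_attempted = counts["correct"] / attempted   # precision-like term
    return (
        2 * overall_correct * correct_given_attempted
        / (overall_correct + correct_given_attempted)
    )

# Example: 600 correct, 300 incorrect, 100 not attempted out of 1,000 prompts.
grades = ["correct"] * 600 + ["incorrect"] * 300 + ["not_attempted"] * 100
print(f"F1 = {simpleqa_f1(grades):.3f}")  # prints F1 = 0.632
```

Under this kind of metric, abstaining on questions a model would get wrong raises accuracy-given-attempted but lowers overall accuracy, so the harmonic mean rewards models that are both knowledgeable and well calibrated about when to answer.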

Country of Origin
🇺🇸 United States

Page Count
15 pages

Category
Computer Science: Computation and Language