Score: 1

VerifyThisBench: Generating Code, Specifications, and Proofs All at Once

Published: May 25, 2025 | arXiv ID: 2505.19271v1

By: Xun Deng, Sicheng Zhong, Andreas Veneris, and more

Potential Business Impact:

Tests whether AI can write provably correct computer programs.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) have demonstrated remarkable progress in code generation, but many existing benchmarks are approaching saturation, offer little guarantee of the trustworthiness of the generated programs, and provide limited insight into deeper reasoning capabilities. We introduce VerifyThisBench, a new benchmark designed to evaluate LLMs on end-to-end program verification tasks that require interpreting natural language problem descriptions, formulating formal specifications, generating code, and constructing correctness proofs. Our evaluation reveals that even state-of-the-art (SOTA) models, such as o3-mini, achieve a pass rate of less than 4%, with many outputs failing to compile. To reduce task complexity, we further propose VerifyThisBenchXS, a variant in which partial implementations or proofs are provided. We systematically assess SOTA models on both benchmarks, uncovering key strengths and limitations in their formal reasoning and verification capabilities.
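
To make the end-to-end task concrete, below is a minimal sketch of the kind of triple a model must produce: executable code, a formal specification, and a machine-checked proof. This hypothetical example is written in Lean 4 and is not drawn from the benchmark itself; the name myMax and the stated property are illustrative only.

    -- Hypothetical illustration (not from VerifyThisBench): an end-to-end
    -- verification task bundles code, a formal spec, and a proof.

    -- Code: return the larger of two natural numbers.
    def myMax (a b : Nat) : Nat :=
      if a ≥ b then a else b

    -- Specification and proof: the result bounds both inputs from above.
    -- `split` cases on the if-condition; `omega` discharges the resulting
    -- linear-arithmetic goals.
    theorem myMax_correct (a b : Nat) :
        myMax a b ≥ a ∧ myMax a b ≥ b := by
      unfold myMax
      split <;> omega

Under this framing, an output that fails to compile, or that compiles but whose proof does not check, counts as a failure, which helps explain the sub-4% pass rates reported above.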

Country of Origin
🇨🇦 Canada

Page Count
14 pages

Category
Computer Science:
Software Engineering