VerifyThisBench: Generating Code, Specifications, and Proofs All at Once
By: Xun Deng, Sicheng Zhong, Andreas Veneris, and more
Potential Business Impact:
Tests if AI can write correct, provable computer programs.
Large language models (LLMs) have demonstrated remarkable progress in code generation, but many existing benchmarks are approaching saturation, offer little guarantee of the trustworthiness of the generated programs, and provide limited insight into deeper reasoning capabilities. We introduce VerifyThisBench, a new benchmark designed to evaluate LLMs on end-to-end program verification tasks that require interpreting natural language problem descriptions, formulating formal specifications, generating code, and constructing correctness proofs. Our evaluation reveals that even state-of-the-art (SOTA) models, such as o3-mini, achieve a pass rate of less than 4%, with many outputs failing to compile. To reduce task complexity, we further propose VerifyThisBenchXS, a variant in which partial implementations or proofs are provided. We systematically assess SOTA models on both benchmarks, uncovering key strengths and limitations in their formal reasoning and verification capabilities.
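To make the task concrete, here is a minimal sketch, in Lean 4, of what an end-to-end verification task asks a model to produce from a one-line informal description: an implementation, a formal specification, and a machine-checked proof, all at once. The example is illustrative only and is not drawn from the benchmark; the name myMax and the chosen specification are hypothetical, the benchmark itself may target other verification languages and tools, and the proof assumes a recent Lean 4 toolchain in which the omega tactic is available.

```lean
-- Hypothetical task: from the informal description "return the larger of
-- two natural numbers", produce code, a formal specification, and a proof.

-- Code: the implementation itself.
def myMax (a b : Nat) : Nat :=
  if a ≤ b then b else a

-- Specification and proof: the result bounds both inputs from above and
-- equals one of them. The proof is checked mechanically, so a passing
-- solution is correct by construction rather than by passing tests.
theorem myMax_spec (a b : Nat) :
    a ≤ myMax a b ∧ b ≤ myMax a b ∧ (myMax a b = a ∨ myMax a b = b) := by
  unfold myMax
  split <;> omega  -- case-split on the `if`, then close each case by linear arithmetic
```

A solution counts as passing only if the verifier accepts the proof against the stated specification, which is what distinguishes this setting from test-based code generation benchmarks.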
Similar Papers
TF-Bench: Evaluating Program Semantics Reasoning with Type Inference in System F
Computation and Language
Tests if computers truly understand code.
Verification Limits Code LLM Training
Software Engineering
Makes AI write better computer code by fixing tests.
VerifyBench: A Systematic Benchmark for Evaluating Reasoning Verifiers Across Domains
Artificial Intelligence
Tests how well AI checks its own answers.