VeriEquivBench: An Equivalence Score for Ground-Truth-Free Evaluation of Formally Verifiable Code
By: Lingfei Zeng, Fengdi Che, Xuhan Huang, and more
Potential Business Impact:
Checks computer code for mistakes automatically.
Formal verification is the next frontier for ensuring the correctness of code generated by Large Language Models (LLMs). While methods that co-generate code and formal specifications in a formal language such as Dafny can, in principle, prove alignment with user intent, progress is bottlenecked by the difficulty of evaluating specification quality. Current benchmarks rely on matching against ground-truth specifications, a manual, expertise-intensive process that has limited existing datasets to a few hundred simple problems and suffers from reliability issues. To address this, we introduce VeriEquivBench, a new benchmark of 2,389 complex algorithmic problems that probe the limitations of current models in both code generation and formal reasoning. Our evaluation framework replaces ground-truth matching with a formally grounded metric, the equivalence score, and rigorously verifies the quality of generated specifications and code. Our results show that generating formally verifiable code remains a profound challenge for state-of-the-art LLMs. This underscores both the difficulty of the task and the need for benchmarks like VeriEquivBench to drive progress toward scalable and reliable coding agents.
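To make the idea of a formally grounded equivalence check concrete, here is a minimal Dafny sketch. It is not taken from the paper: the predicate names and the absolute-value example are illustrative assumptions, and VeriEquivBench's actual equivalence score is defined by the authors over generated specifications and code. The sketch only shows the underlying idea, that two independently written specifications of the same intent can be proved interchangeable by the verifier itself rather than matched syntactically against a ground-truth specification.

```dafny
// Hypothetical illustration (names and example chosen here, not from the paper):
// two independently written specifications of the same intent,
// "y is the absolute value of x".
predicate SpecA(x: int, y: int)
{
  y == (if x < 0 then -x else x)
}

predicate SpecB(x: int, y: int)
{
  y >= 0 && (y == x || y == -x)
}

// An equivalence check carried out by the verifier itself: if this lemma
// is discharged, the two specifications agree on every input, so neither
// needs to be compared against a hand-written ground-truth spec.
lemma SpecsEquivalent(x: int, y: int)
  ensures SpecA(x, y) <==> SpecB(x, y)
{
}
```

If the lemma verifies with an empty body, the two specifications are logically equivalent on all inputs; a verification failure localizes a genuine semantic disagreement. A signal of this kind is what a ground-truth-free metric such as an equivalence score can be built on.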
Similar Papers
EquiBench: Benchmarking Large Language Models' Reasoning about Program Semantics via Equivalence Checking
Machine Learning (CS)
Tests if computer programs are truly the same.
A benchmark for vericoding: formally verified program synthesis
Software Engineering
Makes computer code work perfectly, every time.
VerifyThisBench: Generating Code, Specifications, and Proofs All at Once
Software Engineering
Tests if AI can write correct, provable computer programs.