Score: 2

MiniF2F-Dafny: LLM-Guided Mathematical Theorem Proving via Auto-Active Verification

Published: December 11, 2025 | arXiv ID: 2512.10187v1

By: Mantas Baksys, Stefan Zetzsche, Olivier Bouissou

BigTech Affiliations: Amazon

Potential Business Impact:

LLMs guide an automated verifier (Dafny) to prove math theorems, reducing the manual effort of writing formal proofs.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

We present miniF2F-Dafny, the first translation of the mathematical reasoning benchmark miniF2F to an automated theorem prover: Dafny. Previously, the benchmark existed only in interactive theorem provers (Lean, Isabelle, HOL Light, Metamath). We find that Dafny's automation verifies 99/244 (40.6%) of the test set and 109/244 (44.7%) of the validation set with empty proofs, requiring no manual proof steps. For problems where empty proofs fail, we evaluate 12 off-the-shelf LLMs on providing proof hints. The best model we test achieves a 55.7% pass@4 success rate when employing iterative error correction. These preliminary results highlight an effective division of labor: LLMs provide high-level guidance while automation handles low-level details. Our benchmark can be found on GitHub at http://github.com/dafny-lang/miniF2F.
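To make the "empty proof" result concrete, here is a minimal, hypothetical Dafny sketch (not taken from the benchmark): the first lemma is simple enough that Dafny's SMT-backed automation verifies it with an empty body, while the second may need a proof hint, shown here as an explicit induction step of the kind an LLM could suggest.

```dafny
// Hypothetical illustration (not from miniF2F-Dafny): a goal simple enough
// that Dafny's automation verifies it with an empty proof body.
lemma AddCommutes(a: int, b: int)
  ensures a + b == b + a
{
}

// Recursive definition of 0 + 1 + ... + n.
function Sum(n: nat): nat
{
  if n == 0 then 0 else n + Sum(n - 1)
}

// A closed-form claim that may not verify with an empty body; the recursive
// lemma call below acts as an induction hint of the kind an LLM could supply.
lemma SumClosedForm(n: nat)
  ensures 2 * Sum(n) == n * (n + 1)
{
  if n > 0 {
    SumClosedForm(n - 1);  // induction hypothesis for n - 1
  }
}
```

In the workflow the abstract describes, such hints (recursive lemma calls, intermediate assertions) are proposed by an LLM, the file is re-checked with `dafny verify`, and any verifier error messages are fed back to the model for iterative error correction.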

Country of Origin
🇺🇸 🇬🇧 United States, United Kingdom

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)