Agentic Proof Automation: A Case Study
By: Yichen Xu, Martin Odersky
Potential Business Impact:
Cuts the cost of machine-checked proofs by letting AI agents handle most routine proof engineering under human guidance.
Proof engineering is notoriously labor-intensive: proofs that are straightforward on paper often require lengthy scripts in theorem provers. Recent advances in large language models (LLMs) create new opportunities for proof automation: modern LLMs not only generate proof scripts, but also support agentic behavior, exploring codebases and iteratively refining their outputs against prover feedback. These advances enable an emerging scheme where LLM-based agents undertake most proof engineering under human guidance. Humans provide mathematical insight (definitions, theorems, proof strategies); agents handle the mechanical work of proof development. We call this scheme agentic proof automation. We present this scheme through a case study: mechanizing the semantic type soundness of a sophisticated formal system, System Capless, in Lean 4, a development comprising over 14,000 lines of code. Equipped with a single lightweight proof-checking tool, off-the-shelf LLM agents completed 189 proof engineering tasks with an 87% success rate; only 16% required human intervention. The case study demonstrates that agents are capable proof engineers that substantially boost productivity, though they fall short in creative reasoning and still require human guidance in certain cases. We release an interactive explorer where readers can examine all agent interactions; the mechanization is open-sourced for experiments and extensions.
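At its core, the scheme is a feedback loop: the agent drafts a proof script, a lightweight tool checks it with the prover, and the prover's error messages drive the next revision, with a human stepping in only when the agent gets stuck. The sketch below illustrates that loop under stated assumptions; it is not the authors' actual tooling. It assumes a hypothetical `propose_proof` callable standing in for the LLM agent and uses `lake env lean` to check a file inside a Lean 4 (Lake) project.

```python
import subprocess


def check_proof(file_path: str) -> tuple[bool, str]:
    """Type-check a Lean file and return (success, prover feedback)."""
    # Assumes a Lake-based Lean 4 project; `lake env lean <file>` elaborates
    # the file in the project's environment and reports any errors.
    result = subprocess.run(
        ["lake", "env", "lean", file_path],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0, result.stdout + result.stderr


def prove_with_agent(task: str, file_path: str, propose_proof,
                     max_attempts: int = 5) -> bool:
    """Ask an LLM agent for a proof script and refine it against prover feedback."""
    feedback = ""
    for _ in range(max_attempts):
        # `propose_proof` is a hypothetical stand-in for an off-the-shelf LLM
        # agent: given the task description and the prover's last feedback,
        # it writes a candidate proof script to `file_path`.
        propose_proof(task, file_path, feedback)
        ok, feedback = check_proof(file_path)
        if ok:
            return True   # the prover accepted the proof
    return False          # repeated failures: hand the task back to a human
```

In the case study, the human-supplied definitions, theorem statements, and proof strategies roughly correspond to `task` here, while the single proof-checking tool plays a role analogous to `check_proof`.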