Leanabell-Prover: Posttraining Scaling in Formal Reasoning
By: Jingyuan Zhang, Qi Wang, Xingguang Ji, and more
Potential Business Impact:
Makes computers prove math ideas much faster.
Recent advances in automated theorem proving (ATP) through LLMs have highlighted the potential of formal reasoning with Lean 4 code. However, ATP has not yet been revolutionized by the recent posttraining scaling demonstrated by OpenAI's o1/o3 and DeepSeek's R1. In this work, we investigate the entire posttraining pipeline for ATP, aiming to align it with breakthroughs in natural-language reasoning models. To begin, we continually train current ATP models on a hybrid dataset, which consists of numerous statement-proof pairs along with additional data aimed at incorporating cognitive behaviors that emulate human reasoning and hypothesis refinement. Next, we explore reinforcement learning using the outcome reward returned by the Lean 4 compiler. Through our designed continual training and reinforcement learning processes, we have successfully improved existing formal provers, including both DeepSeek-Prover-v1.5 and Goedel-Prover, achieving state-of-the-art performance in whole-proof generation. For example, we achieve a 59.8% pass rate (pass@32) on MiniF2F. This is an ongoing project; we will progressively update our findings and release our data and training details.
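The abstract's key training signal is an outcome reward from the Lean 4 compiler: a generated whole proof either type-checks (reward 1) or it does not (reward 0). Below is a minimal sketch of such a reward function, assuming proofs are checked by invoking the `lean` CLI on a standalone file; the paper does not specify its verification harness, and a real setup (e.g., one depending on Mathlib) would typically run inside a Lake project via `lake env lean`. The function name and the example theorem are illustrative, not from the paper.

```python
import subprocess
import tempfile
from pathlib import Path

def lean_outcome_reward(proof_source: str, timeout_s: int = 60) -> float:
    """Binary outcome reward for RL: 1.0 if the Lean 4 compiler accepts
    the whole proof, 0.0 otherwise. Hypothetical sketch of the kind of
    compiler-based reward the abstract describes."""
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "Attempt.lean"
        src.write_text(proof_source)
        try:
            # Type-check the file; a non-zero exit code means the proof
            # failed to compile. Projects using Mathlib would instead run
            # ["lake", "env", "lean", str(src)] from the project root.
            result = subprocess.run(
                ["lean", str(src)],
                capture_output=True,
                timeout=timeout_s,
            )
        except subprocess.TimeoutExpired:
            return 0.0  # treat timeouts as failed proofs
        return 1.0 if result.returncode == 0 else 0.0

# An example statement-proof pair, of the kind the hybrid
# continual-training dataset consists of:
example = """
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
"""
print(lean_outcome_reward(example))  # 1.0 if Lean 4 is on PATH
```

Because the reward is purely outcome-based, no intermediate proof steps need to be scored; the compiler acts as a verifier over complete candidate proofs, which fits the whole-proof generation setting the abstract targets.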
Similar Papers
Leanabell-Prover-V2: Verifier-integrated Reasoning for Formal Theorem Proving via Reinforcement Learning
Artificial Intelligence
Helps computers prove math ideas correctly.
Towards Solving More Challenging IMO Problems via Decoupled Reasoning and Proving
Logic in Computer Science
Helps computers solve hard math problems.
EconProver: Towards More Economical Test-Time Scaling for Automated Theorem Proving
Computation and Language
Makes computers prove math problems faster, cheaper.