Autoformalization in the Era of Large Language Models: A Survey
By: Ke Weng, Lun Du, Sirui Li, and more
Potential Business Impact:
Turns everyday math writing into proofs computers can check.
Autoformalization, the process of transforming informal mathematical propositions into verifiable formal representations, is a foundational task in automated theorem proving, offering a new perspective on the use of mathematics in both theoretical and applied domains. Driven by the rapid progress in artificial intelligence, particularly large language models (LLMs), this field has witnessed substantial growth, bringing both new opportunities and unique challenges. In this survey, we provide a comprehensive overview of recent advances in autoformalization from both mathematical and LLM-centric perspectives. We examine how autoformalization is applied across various mathematical domains and levels of difficulty, and analyze the end-to-end workflow from data preprocessing to model design and evaluation. We further explore the emerging role of autoformalization in enhancing the verifiability of LLM-generated outputs, highlighting its potential to improve both the trustworthiness and reasoning capabilities of LLMs. Finally, we summarize key open-source models and datasets supporting current research, and discuss open challenges and promising future directions for the field.
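To make the task concrete, here is a minimal sketch of what an autoformalization system is expected to produce: an informal English statement paired with a formal, machine-checkable counterpart. The sketch assumes Lean 4 with the Mathlib library; the theorem name and proof script are illustrative examples, not taken from the survey.

```lean
import Mathlib

-- Informal input: "The sum of two even integers is even."
-- A hypothetical formal output an autoformalization model might emit.
-- In Mathlib, `Even a` unfolds to `∃ r, a = r + r`.
theorem even_add_even {a b : ℤ} (ha : Even a) (hb : Even b) :
    Even (a + b) := by
  obtain ⟨x, hx⟩ := ha   -- hx : a = x + x
  obtain ⟨y, hy⟩ := hb   -- hy : b = y + y
  exact ⟨x + y, by rw [hx, hy]; ring⟩
```

Once a statement is in this form, the proof assistant certifies it mechanically; this verifiability is what separates formal representations from the informal mathematics they are translated from.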
Similar Papers
Towards a Common Framework for Autoformalization
Artificial Intelligence
AI learns to turn ideas into computer rules.
Autoformalization in the Wild: Assessing LLMs on Real-World Mathematical Definitions
Computation and Language
Helps computers turn math words into code.
Towards Autoformalization of LLM-generated Outputs for Requirement Verification
Computation and Language
Checks that computer-written text meets the requirements we set.