Evaluating Autoformalization Robustness via Semantically Similar Paraphrasing
By: Hayden Moore, Asfahan Shah
Potential Business Impact:
Makes computers write math proofs more reliably.
Large Language Models (LLMs) have recently emerged as powerful tools for autoformalization. Despite their impressive performance, these models can still struggle to produce grounded and verifiable formalizations. Recent work in text-to-SQL has revealed that LLMs can be sensitive to paraphrased natural language (NL) inputs, even when a high degree of semantic fidelity is preserved (Safarzadeh, Oroojlooyjadid, and Roth 2025). In this paper, we investigate this claim in the autoformalization domain. Specifically, we evaluate the robustness of LLMs that generate formal proofs from semantically similar paraphrased NL statements, measuring both semantic and compilation validity. Using the formal benchmarks MiniF2F (Zheng, Han, and Polu 2021) and the Lean 4 version of ProofNet (Xin et al. 2024), and two modern LLMs, we generate paraphrased natural language statements and cross-evaluate them across both models. The results reveal performance variability across paraphrased inputs, demonstrating that minor shifts in NL statements can significantly impact model outputs.
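To make the evaluation protocol concrete, below is a minimal Python sketch of the paraphrase-and-cross-evaluate loop the abstract describes. The helper names (paraphrase, autoformalize, lean_compiles) and the loop structure are hypothetical placeholders standing in for the LLM calls and the Lean 4 toolchain check; this is an illustration of the setup under those assumptions, not the authors' actual implementation.

from dataclasses import dataclass

@dataclass
class EvalResult:
    formalizer: str       # model that produced the Lean 4 statement
    original_ok: bool     # formalization of the original NL statement compiles
    paraphrase_ok: bool   # formalization of the paraphrased statement compiles

def paraphrase(nl_statement: str, model: str) -> str:
    """Ask one LLM for a semantically similar rewording (hypothetical API call)."""
    raise NotImplementedError

def autoformalize(nl_statement: str, model: str) -> str:
    """Ask an LLM to translate the NL statement into Lean 4 (hypothetical API call)."""
    raise NotImplementedError

def lean_compiles(lean_code: str) -> bool:
    """Check compilation validity by running the Lean 4 toolchain (placeholder)."""
    raise NotImplementedError

def cross_evaluate(benchmark: list[str], models: list[str]) -> list[EvalResult]:
    """Paraphrase each benchmark statement with one model, then formalize both the
    original and the paraphrase with every model, recording compilation validity."""
    results = []
    for nl in benchmark:                      # e.g. MiniF2F or ProofNet statements
        for paraphraser in models:
            reworded = paraphrase(nl, paraphraser)
            for formalizer in models:         # cross-evaluate across both models
                results.append(EvalResult(
                    formalizer=formalizer,
                    original_ok=lean_compiles(autoformalize(nl, formalizer)),
                    paraphrase_ok=lean_compiles(autoformalize(reworded, formalizer)),
                ))
    return results

Comparing original_ok against paraphrase_ok over the whole benchmark then exposes the robustness gap: how often a semantically equivalent rewording flips a compiling formalization into a failing one.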
Similar Papers
Towards Autoformalization of LLM-generated Outputs for Requirement Verification
Computation and Language
Checks if computer writing matches what we want.
Evaluating NL2SQL via SQL2NL
Computation and Language
Makes AI better understand different ways of asking questions.