Towards Autoformalization of LLM-generated Outputs for Requirement Verification

Published: November 14, 2025 | arXiv ID: 2511.11829v1

By: Mihir Gupte, Ramesh S

Potential Business Impact:

Verifies that LLM-generated artifacts, such as test scenarios, are logically consistent with the natural language requirements they were derived from.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Autoformalization, the process of translating informal statements into formal logic, has gained renewed interest with the emergence of powerful Large Language Models (LLMs). While LLMs show promise in generating structured outputs from natural language (NL), such as Gherkin scenarios from NL feature requirements, there is currently no formal method to verify whether these outputs are accurate. This paper takes a preliminary step toward addressing this gap by exploring the use of a simple LLM-based autoformalizer to verify LLM-generated outputs against a small set of natural language requirements. We conducted two distinct experiments. In the first, the autoformalizer correctly identified that two differently-worded NL requirements were logically equivalent, demonstrating the pipeline's potential for consistency checks. In the second, the autoformalizer identified a logical inconsistency between a given NL requirement and an LLM-generated output, highlighting its utility as a formal verification tool. Our findings, while limited, suggest that autoformalization holds significant potential for ensuring the fidelity and logical consistency of LLM-generated outputs, laying a crucial foundation for future, more extensive studies of this novel application.
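
The abstract does not say which solver or logical encoding the pipeline uses, so the sketch below is only a rough illustration of the checking stage of such a pipeline: once an autoformalizer has translated two differently-worded requirements into formulas, an off-the-shelf SMT solver can decide their equivalence. The Z3 solver and the propositional atoms (ignition_on, door_open, chime) are assumptions chosen for illustration, not details from the paper.

# A minimal sketch of an equivalence check over two formalized
# requirements, assuming a propositional encoding. Z3 and the example
# atoms are illustrative assumptions, not the paper's actual setup.
from z3 import Bools, Implies, And, Xor, Solver, unsat

# Hypothetical atoms for two differently-worded requirements, e.g.:
#   R1: "If the ignition is on and the door is open, sound the chime."
#   R2: "Sound the chime whenever the door is open while the ignition is on."
ignition_on, door_open, chime = Bools("ignition_on door_open chime")

req1 = Implies(And(ignition_on, door_open), chime)
req2 = Implies(And(door_open, ignition_on), chime)

# Two formulas are logically equivalent iff their XOR is unsatisfiable.
solver = Solver()
solver.add(Xor(req1, req2))
print("equivalent" if solver.check() == unsat else "not equivalent")

The paper's second experiment, detecting an inconsistency between a requirement and an LLM-generated output, fits the same pattern: assert the conjunction of the two formalized statements and test whether it is unsatisfiable.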

Page Count
13 pages

Category
Computer Science:
Computation and Language