Towards Autoformalization of LLM-generated Outputs for Requirement Verification
By: Mihir Gupte, Ramesh S
Potential Business Impact:
Checks whether AI-generated text actually matches the requirements it was written from.
Autoformalization, the process of translating informal statements into formal logic, has gained renewed interest with the emergence of powerful Large Language Models (LLMs). While LLMs show promise in generating structured outputs from natural language (NL), such as Gherkin scenarios from NL feature requirements, there is currently no formal method to verify whether these outputs are accurate. This paper takes a preliminary step toward addressing this gap by exploring the use of a simple LLM-based autoformalizer to verify LLM-generated outputs against a small set of natural language requirements. We conducted two distinct experiments. In the first, the autoformalizer successfully identified that two differently worded NL requirements were logically equivalent, demonstrating the pipeline's potential for consistency checks. In the second, the autoformalizer identified a logical inconsistency between a given NL requirement and an LLM-generated output, highlighting its utility as a formal verification tool. Our findings, while limited, suggest that autoformalization holds significant potential for ensuring the fidelity and logical consistency of LLM-generated outputs, laying a foundation for future, more extensive studies of this novel application.
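To make the verification step concrete, the following is a minimal sketch of the equivalence check described in the first experiment, under the assumption that the autoformalizer has already translated two differently worded requirements into propositional formulas. The variable names, the example requirement, and the use of the Z3 SMT solver are illustrative assumptions, not the paper's actual toolchain.

```python
# Minimal sketch: checking whether two formalized requirements are logically
# equivalent using the Z3 SMT solver (pip install z3-solver). The formulas
# below are hypothetical stand-ins for autoformalizer output.
from z3 import Bools, Implies, And, Not, Solver, unsat

ignition_on, brake_pressed, engine_start = Bools("ignition_on brake_pressed engine_start")

# Two differently worded requirements, formalized as implications:
# "If the ignition is on and the brake is pressed, the engine starts"
# versus its contrapositive phrasing.
req_a = Implies(And(ignition_on, brake_pressed), engine_start)
req_b = Implies(Not(engine_start), Not(And(ignition_on, brake_pressed)))

# The two are equivalent iff their disagreement (req_a XOR req_b)
# is unsatisfiable.
solver = Solver()
solver.add(req_a != req_b)
print("equivalent" if solver.check() == unsat else "not equivalent")
```

The same pattern would cover the second experiment: assert the formalized requirement together with the negation of the formalized LLM output and check satisfiability to surface a logical inconsistency.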
Similar Papers
Evaluating Autoformalization Robustness via Semantically Similar Paraphrasing
Computation and Language
Makes computers write math proofs more reliably.
A Neurosymbolic Approach to Natural Language Formalization and Verification
Computation and Language
Makes AI follow rules perfectly, like a robot lawyer.