Preguss: It Analyzes, It Specifies, It Verifies
By: Zhongyi Wang, Tengjie Lin, Mingshuai Chen, and more
Potential Business Impact:
Helps computers check big programs for mistakes.
Fully automated verification of large-scale software and hardware systems is arguably the holy grail of formal methods. Large language models (LLMs) have recently demonstrated their potential to raise the degree of automation in formal verification, e.g., by generating the formal specifications essential to deductive verification, yet they scale poorly due to context-length limitations and, more importantly, the difficulty of inferring complex, interprocedural specifications. This paper outlines Preguss, a modular, fine-grained framework for automating the generation and refinement of formal specifications. Preguss synergizes static analysis and deductive verification by orchestrating two components: (i) potential runtime error (RTE)-guided construction and prioritization of verification units, and (ii) LLM-aided synthesis of interprocedural specifications at the unit level. We envisage that Preguss paves a compelling path towards the automated verification of large-scale programs.
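The two components above can be pictured as a simple pipeline. The sketch below is purely illustrative and not from the paper: all names (`VerificationUnit`, `prioritize`, `synthesize_spec`) are hypothetical, and the LLM query is stubbed with a string template standing in for a real model call.

```python
# Hypothetical sketch of a Preguss-style pipeline (illustrative names only):
# (i) rank verification units by the potential runtime errors (RTEs) that a
# static analyzer would report, then (ii) synthesize a specification for each
# unit; the LLM step is stubbed here with a simple template.
from dataclasses import dataclass


@dataclass
class VerificationUnit:
    name: str
    potential_rtes: list  # e.g. ["div_by_zero", "index_out_of_bounds"]


def prioritize(units):
    """Order units so those with more potential RTEs are handled first."""
    return sorted(units, key=lambda u: len(u.potential_rtes), reverse=True)


def synthesize_spec(unit):
    """Stand-in for an LLM query proposing a precondition per unit."""
    guards = " && ".join(f"avoids({rte})" for rte in unit.potential_rtes)
    return f"requires {guards or 'true'};"


def run_pipeline(units):
    """RTE-guided prioritization followed by per-unit spec synthesis."""
    return {u.name: synthesize_spec(u) for u in prioritize(units)}


if __name__ == "__main__":
    units = [
        VerificationUnit("parse", ["index_out_of_bounds"]),
        VerificationUnit("div", ["div_by_zero", "overflow"]),
    ]
    for name, spec in run_pipeline(units).items():
        print(name, spec)
```

In a real system the synthesized specifications would be fed back to a deductive verifier, and failed proof obligations would trigger refinement of the candidate specifications.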
Similar Papers
Leveraging LLMs for Formal Software Requirements -- Challenges and Prospects
Software Engineering
Makes computer programs safer by checking them automatically.
Validating Formal Specifications with LLM-generated Test Cases
Software Engineering
Helps computers find mistakes in code automatically.
Vulnerability Detection: From Formal Verification to Large Language Models and Hybrid Approaches: A Comprehensive Overview
Software Engineering
Finds software bugs better by mixing old and new methods.