A Tale of 1001 LoC: Potential Runtime Error-Guided Specification Synthesis for Verifying Large-Scale Programs
By: Zhongyi Wang, Tengjie Lin, Mingshuai Chen, and more
Fully automated verification of large-scale software and hardware systems is arguably the holy grail of formal methods. Large language models (LLMs) have recently shown promise for raising the degree of automation in formal verification, e.g., by generating the formal specifications that deductive verification requires, yet they scale poorly due to long-context reasoning limitations and, more importantly, the difficulty of inferring complex, interprocedural specifications. This paper presents Preguss, a modular, fine-grained framework for automating the generation and refinement of formal specifications. Preguss synergizes static analysis and deductive verification by steering two components in a divide-and-conquer fashion: (i) construction and prioritization of verification units guided by potential runtime errors (RTEs), and (ii) LLM-aided synthesis of interprocedural specifications at the unit level. We show that Preguss substantially outperforms state-of-the-art LLM-based approaches; in particular, it enables highly automated RTE-freeness verification for real-world programs with over a thousand LoC, reducing human verification effort by 80.6%–88.9%.
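The abstract does not fix a specification language; as a minimal sketch, assuming a Frama-C/ACSL-style setting for C programs (a common target for RTE-freeness verification), the following hypothetical pair of functions illustrates the kind of interprocedural contract such a framework must synthesize: the potential RTE in max_elem (an out-of-bounds read) is provable only from the postcondition of its callee argmax. Both functions and all annotations are illustrative, not taken from the paper.

```c
#include <stddef.h>

/* Two verification units: the caller's potential RTE (an out-of-bounds
   read in max_elem) can be discharged only via the interprocedural
   postcondition on argmax -- the kind of contract the abstract says
   Preguss synthesizes at the unit level. */

/*@ requires n > 0;
  @ requires \valid_read(a + (0 .. n - 1));
  @ assigns \nothing;
  @ ensures 0 <= \result < n;
  @*/
size_t argmax(const int *a, size_t n) {
  size_t best = 0, i;
  /*@ loop invariant 1 <= i <= n;
    @ loop invariant 0 <= best < i;
    @ loop assigns i, best;
    @ loop variant n - i;
    @*/
  for (i = 1; i < n; i++)
    if (a[i] > a[best]) best = i;
  return best;
}

/*@ requires n > 0;
  @ requires \valid_read(a + (0 .. n - 1));
  @ assigns \nothing;
  @*/
int max_elem(const int *a, size_t n) {
  /* RTE-free only because argmax's ensures-clause bounds the index. */
  return a[argmax(a, n)];
}
```

In such a setting, an RTE-guided pass (e.g., Frama-C's rte plugin) would first materialize the out-of-bounds proof obligation in max_elem and prioritize it as a verification unit; an LLM would then be prompted for argmax's ensures-clause until a deductive backend (e.g., WP) discharges the proof. This workflow is an assumption consistent with, but not detailed by, the abstract.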