Watchdogs and Oracles: Runtime Verification Meets Large Language Models for Autonomous Systems
By: Angelo Ferrando
Potential Business Impact:
Helps make autonomous systems, such as self-driving cars, safer and more trustworthy.
Assuring the safety and trustworthiness of autonomous systems is particularly difficult when learning-enabled components and open environments are involved. Formal methods provide strong guarantees but depend on complete models and static assumptions. Runtime verification (RV) complements them by monitoring executions at run time and, in its predictive variants, by anticipating potential violations. Large language models (LLMs), meanwhile, excel at translating natural language into formal artefacts and recognising patterns in data, yet they remain error-prone and lack formal guarantees. This vision paper argues for a symbiotic integration of RV and LLMs. RV can serve as a guardrail for LLM-driven autonomy, while LLMs can extend RV by assisting specification capture, supporting anticipatory reasoning, and helping to handle uncertainty. We outline how this mutual reinforcement differs from existing surveys and roadmaps, discuss challenges and certification implications, and identify future research directions towards dependable autonomy.
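As a rough illustration of the guardrail idea sketched in the abstract (not code from the paper), the following Python snippet shows a runtime monitor that checks a simple safety invariant over actions proposed by an LLM planner, stubbed here with fixed proposals, and falls back to a known-safe action on violation. All names, the speed-limit property, and the fallback policy are illustrative assumptions.

```python
# Minimal sketch: runtime verification as a guardrail for an LLM-driven controller.
# The monitored property, action format, and planner stub are assumptions for
# illustration only; they are not taken from the paper.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Action:
    """An action proposed by the (here, stubbed) LLM planner."""
    name: str
    target_speed: float  # m/s


class SpeedLimitMonitor:
    """Checks the invariant 'target speed never exceeds the limit' at run time."""

    def __init__(self, speed_limit: float):
        self.speed_limit = speed_limit
        self.trace: List[Action] = []  # observed execution so far

    def check(self, action: Action) -> bool:
        """Record the action and return True if the property still holds."""
        self.trace.append(action)
        return action.target_speed <= self.speed_limit


def guarded_step(propose_action: Callable[[], Action],
                 monitor: SpeedLimitMonitor,
                 fallback: Action) -> Action:
    """One control step: accept the LLM proposal only if the monitor agrees."""
    proposal = propose_action()
    if monitor.check(proposal):
        return proposal
    # Violation detected at run time: override with a known-safe fallback.
    return fallback


if __name__ == "__main__":
    monitor = SpeedLimitMonitor(speed_limit=13.9)  # ~50 km/h, assumed limit
    fallback = Action(name="hold_speed", target_speed=13.9)

    # Stub standing in for an LLM planner that occasionally over-speeds.
    proposals = iter([Action("cruise", 12.0), Action("overtake", 20.0)])
    for _ in range(2):
        print(guarded_step(lambda: next(proposals), monitor, fallback))
```

In this toy setup the first proposal passes through unchanged while the second is replaced by the fallback; a predictive variant of the monitor would instead flag the violation before the unsafe action is issued.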
Similar Papers
Vulnerability Detection: From Formal Verification to Large Language Models and Hybrid Approaches: A Comprehensive Overview
Software Engineering
Finds software bugs better by mixing old and new methods.
Can Large Language Models Autoformalize Kinematics?
Logic in Computer Science
AI writes robot instructions from plain English.
A Neurosymbolic Approach to Natural Language Formalization and Verification
Computation and Language
Makes AI follow rules perfectly, like a robot lawyer.