Watchdogs and Oracles: Runtime Verification Meets Large Language Models for Autonomous Systems

Published: November 18, 2025 | arXiv ID: 2511.14435v1

By: Angelo Ferrando

Potential Business Impact:

Improves the safety and trustworthiness of autonomous systems such as self-driving cars by using runtime monitors as guardrails around learning-enabled components.

Business Areas:
Autonomous Vehicles, Transportation

Assuring the safety and trustworthiness of autonomous systems is particularly difficult when learning-enabled components and open environments are involved. Formal methods provide strong guarantees but depend on complete models and static assumptions. Runtime verification (RV) complements them by monitoring executions at run time and, in its predictive variants, by anticipating potential violations. Large language models (LLMs), meanwhile, excel at translating natural language into formal artefacts and recognising patterns in data, yet they remain error-prone and lack formal guarantees. This vision paper argues for a symbiotic integration of RV and LLMs. RV can serve as a guardrail for LLM-driven autonomy, while LLMs can extend RV by assisting specification capture, supporting anticipatory reasoning, and helping to handle uncertainty. We outline how this vision of mutual reinforcement differs from existing surveys and roadmaps, discuss challenges and certification implications, and identify future research directions towards dependable autonomy.
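To make the guardrail idea concrete, here is a minimal Python sketch, not taken from the paper: a runtime monitor vets each action proposed by an untrusted LLM planner against simple safety properties before execution, falling back to a safe default on violation. All names (RuntimeMonitor, speed_limit_ok, clearance_ok) and the toy state fields are hypothetical illustrations of the pattern.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    speed: float  # commanded speed in m/s

def speed_limit_ok(state: dict, action: Action) -> bool:
    # Safety property: never command a speed above the posted limit.
    return action.speed <= state["speed_limit"]

def clearance_ok(state: dict, action: Action) -> bool:
    # Safety property: if an obstacle is inside braking distance, only a stop is allowed.
    return state["obstacle_distance"] > state["braking_distance"] or action.speed == 0.0

class RuntimeMonitor:
    """Guardrail: admits an action only if every safety property holds."""
    def __init__(self, properties: list[Callable[[dict, Action], bool]]):
        self.properties = properties

    def admits(self, state: dict, action: Action) -> bool:
        return all(prop(state, action) for prop in self.properties)

def step(monitor: RuntimeMonitor, propose: Callable[[dict], Action],
         fallback: Callable[[dict], Action], state: dict) -> Action:
    action = propose(state)   # untrusted suggestion from the LLM planner
    if monitor.admits(state, action):
        return action          # admitted: all properties verified
    return fallback(state)     # rejected: substitute a safe default

# Example: the monitor overrides an unsafe LLM proposal.
monitor = RuntimeMonitor([speed_limit_ok, clearance_ok])
state = {"speed_limit": 13.9, "obstacle_distance": 8.0, "braking_distance": 12.0}
llm_plan = lambda s: Action("cruise", speed=15.0)      # violates both properties
safe_stop = lambda s: Action("stop", speed=0.0)
print(step(monitor, llm_plan, safe_stop, state).name)  # -> "stop"
```

In the predictive RV variants the paper mentions, one would extend admits to reason over a lookahead horizon and flag actions that could lead to a future violation, rather than only checking the current step.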

Page Count
8 pages

Category
Computer Science: Software Engineering