Verifying Memoryless Sequential Decision-making of Large Language Models
By: Dennis Gross, Helge Spieker, Arnaud Gotlieb
Potential Business Impact:
Automatically checks whether an LLM-driven agent makes provably safe choices in sequential decision-making tasks.
We introduce a tool for rigorous and automated verification of large language model (LLM)-based policies in memoryless sequential decision-making tasks. Given a Markov decision process (MDP) representing the sequential decision-making task, an LLM policy, and a safety requirement expressed as a PCTL formula, our approach incrementally constructs only the portion of the MDP reachable under the LLM's chosen actions. Each state is encoded as a natural-language prompt, the LLM's response is parsed into an action, and the successor states reachable under that action are expanded. The resulting formal model is checked with the Storm model checker to determine whether the policy satisfies the specified safety property. In experiments on standard grid-world benchmarks, we show that open-source LLMs accessed via Ollama can be verified when deterministically seeded, but generally underperform deep reinforcement learning baselines. Our tool natively integrates with Ollama and supports PRISM-specified tasks, enabling continuous benchmarking on user-specified sequential decision-making tasks and laying a practical foundation for formally verifying increasingly capable LLMs.
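To make the construction concrete, the following is a minimal sketch of the verification loop the abstract describes, assuming a local Ollama server and a toy grid world standing in for the PRISM-specified task. The state encoding (`encode_prompt`), action parser (`parse_action`), and dynamics (`successors`) are hypothetical illustrations, not the tool's actual interface.

```python
# Minimal sketch (not the paper's implementation) of the verification loop:
# each reachable state is rendered as a prompt, the LLM's reply is parsed
# into an action, and only the successors under that action are expanded.
from collections import deque

import ollama  # pip install ollama; assumes a local Ollama server is running

MODEL = "llama3"                           # any locally pulled model (assumption)
OPTIONS = {"seed": 0, "temperature": 0.0}  # deterministic seeding, as in the paper

GRID, GOAL = 4, (3, 3)                     # toy 4x4 grid world (illustration only)
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def encode_prompt(state):
    """Hypothetical natural-language encoding of an MDP state."""
    return (f"You are at cell {state} in a {GRID}x{GRID} grid; the goal is {GOAL}. "
            "Reply with exactly one action: up, down, left, or right.")

def parse_action(text):
    """Hypothetical parser mapping the LLM's free-form reply to an action."""
    for action in MOVES:
        if action in text.lower():
            return action
    return "up"  # arbitrary fallback if the reply names no valid action

def successors(state, action):
    """Toy slippery dynamics: intended move with prob 0.9, stay put with 0.1."""
    if state == GOAL:
        return [(state, 1.0)]  # goal is absorbing
    dx, dy = MOVES[action]
    nxt = (min(max(state[0] + dx, 0), GRID - 1),
           min(max(state[1] + dy, 0), GRID - 1))
    return [(nxt, 0.9), (state, 0.1)]

def build_reachable_fragment(initial_state):
    """Breadth-first expansion of the MDP fragment induced by the LLM policy."""
    transitions = {}  # state -> (chosen action, [(successor, probability), ...])
    frontier = deque([initial_state])
    while frontier:
        state = frontier.popleft()
        if state in transitions:
            continue  # already expanded under the (memoryless) policy
        reply = ollama.chat(model=MODEL,
                            messages=[{"role": "user", "content": encode_prompt(state)}],
                            options=OPTIONS)
        action = parse_action(reply["message"]["content"])
        succs = successors(state, action)
        transitions[state] = (action, succs)
        frontier.extend(s for s, _ in succs)
    return transitions

if __name__ == "__main__":
    fragment = build_reachable_fragment((0, 0))
    print(f"Expanded {len(fragment)} reachable states under the LLM policy.")
```

Because the policy is memoryless and the seeding is fixed, the expanded fragment induces a discrete-time Markov chain; the paper's tool then checks this induced model with Storm against the PCTL formula (for example, a reachability property such as P>=0.9 [ F "goal" ]). The exact export and model-checking interface may differ from this sketch.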
Similar Papers
Automated Generation of MDPs Using Logic Programming and LLMs for Robotic Applications
Robotics
Builds robots that learn tasks from simple instructions.
Plan Verification for LLM-Based Embodied Task Completion Agents
Artificial Intelligence
Makes robots learn better by fixing their mistakes.
LTL Verification of Memoryful Neural Agents
Logic in Computer Science
Checks if AI teams follow rules correctly.