VeriGuard: Enhancing LLM Agent Safety via Verified Code Generation
By: Lesly Miculicich, Mihir Parmar, Hamid Palangi, and more
Potential Business Impact:
Keeps AI safe and honest in important jobs.
The deployment of autonomous AI agents in sensitive domains, such as healthcare, introduces critical risks to safety, security, and privacy. These agents may deviate from user objectives, violate data handling policies, or be compromised by adversarial attacks. Mitigating these dangers necessitates a mechanism to formally guarantee that an agent's actions adhere to predefined safety constraints, a challenge that existing systems do not fully address. We introduce VeriGuard, a novel framework that provides formal safety guarantees for LLM-based agents through a dual-stage architecture designed for robust and verifiable correctness. The initial offline stage involves a comprehensive validation process. It begins by clarifying user intent to establish precise safety specifications. VeriGuard then synthesizes a behavioral policy and subjects it to both testing and formal verification to prove its compliance with these specifications. This iterative process refines the policy until it is deemed correct. Subsequently, the second stage provides online action monitoring, where VeriGuard operates as a runtime monitor to validate each proposed agent action against the pre-verified policy before execution. This separation of the exhaustive offline validation from the lightweight online monitoring allows formal guarantees to be practically applied, providing a robust safeguard that substantially improves the trustworthiness of LLM agents.
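To make the dual-stage architecture concrete, here is a minimal sketch of the pattern the abstract describes: an offline loop that synthesizes a policy and iterates until it passes tests and verification, and an online monitor that gates each proposed action against that policy. All names here (synthesize_policy, run_tests, formally_verify, Monitor) and the healthcare email example are hypothetical illustrations, not VeriGuard's actual API.

```python
from typing import Callable

# A policy is modeled here as an executable predicate over proposed actions.
Action = dict
Policy = Callable[[Action], bool]

def synthesize_policy(spec: str) -> Policy:
    """Stand-in for LLM-driven policy synthesis from a clarified safety spec."""
    # Illustrative rule: data may only be emailed to addresses in the clinic's domain.
    return lambda a: (a.get("tool") != "send_email"
                      or a.get("to", "").endswith("@clinic.example.org"))

def run_tests(policy: Policy) -> bool:
    """Stand-in for the testing step: probe the policy with known cases."""
    ok_case = {"tool": "send_email", "to": "dr.smith@clinic.example.org"}
    bad_case = {"tool": "send_email", "to": "attacker@evil.example.com"}
    return policy(ok_case) and not policy(bad_case)

def formally_verify(policy: Policy) -> bool:
    """Stand-in for formal verification (e.g., discharging proof obligations)."""
    return True  # assumed to succeed for this sketch

# --- Stage 1 (offline): refine until the policy passes testing and verification.
def validate(spec: str, max_rounds: int = 3) -> Policy:
    for _ in range(max_rounds):
        policy = synthesize_policy(spec)
        if run_tests(policy) and formally_verify(policy):
            return policy
    raise RuntimeError("could not produce a verified policy for spec: " + spec)

# --- Stage 2 (online): lightweight runtime monitor gating every proposed action.
class Monitor:
    def __init__(self, policy: Policy):
        self.policy = policy

    def approve(self, action: Action) -> bool:
        # Cheap runtime check; the expensive verification already happened offline.
        return self.policy(action)

policy = validate("no patient data may leave the clinic domain")
monitor = Monitor(policy)
action = {"tool": "send_email", "to": "attacker@evil.example.com"}
print("approved" if monitor.approve(action) else "blocked")  # -> blocked
```

The point of the split mirrors the abstract's argument: the costly part (synthesis, testing, formal proof) runs once offline, so the per-action check at runtime stays a fast predicate evaluation.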
Similar Papers
Pro2Guard: Proactive Runtime Enforcement of LLM Agent Safety via Probabilistic Model Checking
Artificial Intelligence
Stops smart robots from doing dangerous things.
DialogGuard: Multi-Agent Psychosocial Safety Evaluation of Sensitive LLM Responses
Artificial Intelligence
Tests AI for safe and helpful online chats.
Safety Guardrails for LLM-Enabled Robots
Robotics
Keeps robots safe from bad robot commands.