VIGIL: Defending LLM Agents Against Tool Stream Injection via Verify-Before-Commit
By: Junda Lin , Zhaomeng Zhou , Zhi Zheng and more
Potential Business Impact:
Keeps smart computer helpers safe from bad instructions.
LLM agents operating in open environments face escalating risks from indirect prompt injection, particularly within the tool stream where manipulated metadata and runtime feedback hijack execution flow. Existing defenses encounter a critical dilemma as advanced models prioritize injected rules due to strict alignment while static protection mechanisms sever the feedback loop required for adaptive reasoning. To reconcile this conflict, we propose \textbf{VIGIL}, a framework that shifts the paradigm from restrictive isolation to a verify-before-commit protocol. By facilitating speculative hypothesis generation and enforcing safety through intent-grounded verification, \textbf{VIGIL} preserves reasoning flexibility while ensuring robust control. We further introduce \textbf{SIREN}, a benchmark comprising 959 tool stream injection cases designed to simulate pervasive threats characterized by dynamic dependencies. Extensive experiments demonstrate that \textbf{VIGIL} outperforms state-of-the-art dynamic defenses by reducing the attack success rate by over 22\% while more than doubling the utility under attack compared to static baselines, thereby achieving an optimal balance between security and utility. Code is available at https://anonymous.4open.science/r/VIGIL-378B/.
Similar Papers
VIGIL: A Reflective Runtime for Self-Healing Agents
Artificial Intelligence
Fixes AI when it makes mistakes.
AgentVigil: Generic Black-Box Red-teaming for Indirect Prompt Injection against LLM Agents
Cryptography and Security
Finds hidden tricks to trick smart computer programs.
IPIGuard: A Novel Tool Dependency Graph-Based Defense Against Indirect Prompt Injection in LLM Agents
Cryptography and Security
Protects smart programs from bad online tricks.