Attention is All You Need to Defend Against Indirect Prompt Injection Attacks in LLMs
By: Yinan Zhong, Qianhao Miao, Yanjiao Chen, and more
Large Language Models (LLMs) have been integrated into many applications (e.g., web agents) to perform sophisticated tasks. However, LLM-empowered applications are vulnerable to Indirect Prompt Injection (IPI) attacks, in which malicious instructions are injected through untrusted external data sources. This paper presents Rennervate, a defense framework that detects and prevents IPI attacks. Rennervate leverages attention features to detect covert injections at a fine-grained token level, enabling precise sanitization that neutralizes IPI attacks while preserving LLM functionality. Specifically, the token-level detector is realized with a 2-step attentive pooling mechanism that aggregates attention heads and response tokens for IPI detection and sanitization. Moreover, we construct a fine-grained IPI dataset, FIPI, which will be open-sourced to support further research. Extensive experiments verify that Rennervate outperforms 15 commercial and academic IPI defense methods, achieving high precision on 5 LLMs and 6 datasets. We also demonstrate that Rennervate is transferable to unseen attacks and robust against adaptive adversaries.
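To make the 2-step attentive pooling idea concrete, here is a minimal PyTorch sketch of what such a token-level detector could look like: step 1 pools attention maps across heads with learned head weights, step 2 pools across response tokens with learned relevance scores, and a small classifier emits a per-input-token injection score used for sanitization. All names, shapes, and layers below (AttentivePoolingDetector, head_weights, response_scorer, sanitize) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class AttentivePoolingDetector(nn.Module):
    """Hypothetical sketch of a token-level IPI detector over attention features.

    attn: tensor of shape (H, R, T) -- attention weights from H heads, where
    each of R response tokens attends over T input (external-data) tokens.
    Returns one injection score in [0, 1] per input token.
    """

    def __init__(self, num_heads: int):
        super().__init__()
        # Step 1: attentive pooling over heads (learned head importance).
        self.head_weights = nn.Parameter(torch.zeros(num_heads))
        # Step 2: attentive pooling over response tokens (learned relevance).
        self.response_scorer = nn.Linear(1, 1)
        # Final per-input-token classification head.
        self.classifier = nn.Linear(1, 1)

    def forward(self, attn: torch.Tensor) -> torch.Tensor:
        # Step 1: weighted sum across heads, (H, R, T) -> (R, T).
        head_alpha = torch.softmax(self.head_weights, dim=0)          # (H,)
        pooled_heads = torch.einsum("h,hrt->rt", head_alpha, attn)    # (R, T)
        # Step 2: attentive pooling across response tokens, (R, T) -> (T,).
        resp_logits = self.response_scorer(
            pooled_heads.mean(dim=1, keepdim=True))                   # (R, 1)
        resp_alpha = torch.softmax(resp_logits, dim=0)                # (R, 1)
        pooled = (resp_alpha * pooled_heads).sum(dim=0)               # (T,)
        # Per-input-token injection probability.
        return torch.sigmoid(
            self.classifier(pooled.unsqueeze(-1))).squeeze(-1)        # (T,)


def sanitize(input_ids: torch.Tensor, scores: torch.Tensor,
             threshold: float = 0.5, mask_id: int = 0) -> torch.Tensor:
    """Neutralize flagged input tokens by replacing them with a mask token,
    leaving the rest of the external data intact (token-level sanitization)."""
    return torch.where(scores > threshold,
                       torch.full_like(input_ids, mask_id), input_ids)
```

The point of the fine-grained design is visible in sanitize: because detection happens per token rather than per document, only the injected span is neutralized and the surrounding benign data still reaches the LLM.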
Similar Papers
Defending against Indirect Prompt Injection by Instruction Detection
Cryptography and Security
Stops bad instructions from tricking smart computer programs.
Mitigating Indirect Prompt Injection via Instruction-Following Intent Analysis
Cryptography and Security
Stops AI from following secret bad commands.
DRIP: Defending Prompt Injection via De-instruction Training and Residual Fusion Model Architecture
Cryptography and Security
Stops smart computer programs from being tricked.