Score: 3

Defending against Indirect Prompt Injection by Instruction Detection

Published: May 8, 2025 | arXiv ID: 2505.06311v2

By: Tongyu Wen, Chenglong Wang, Xiyuan Yang and more

BigTech Affiliations: Microsoft

Potential Business Impact:

Detects hidden instructions embedded in external content, preventing indirect prompt injection from manipulating LLM-based applications.

Business Areas:
Intrusion Detection, Information Technology, Privacy and Security

The integration of Large Language Models (LLMs) with external sources is becoming increasingly common, with Retrieval-Augmented Generation (RAG) being a prominent example. However, this integration introduces the vulnerability of Indirect Prompt Injection (IPI) attacks, where hidden instructions embedded in external data can manipulate LLMs into executing unintended or harmful actions. We recognize that IPI attacks fundamentally rely on the presence of instructions embedded within external content, which can alter the behavioral states of LLMs. Can the effective detection of such state changes help us defend against IPI attacks? In this paper, we propose InstructDetector, a novel detection-based approach that leverages the behavioral states of LLMs to identify potential IPI attacks. Specifically, we demonstrate that the hidden states and gradients from intermediate layers provide highly discriminative features for instruction detection. By effectively combining these features, InstructDetector achieves a detection accuracy of 99.60% in the in-domain setting and 96.90% in the out-of-domain setting, and reduces the attack success rate to just 0.03% on the BIPIA benchmark. The code is publicly available at https://github.com/MYVAE/Instruction-detection.
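The core idea of the paper is that injected instructions shift an LLM's internal "behavioral state", and that features derived from intermediate-layer hidden states and gradients can be fed to a classifier that separates instruction-bearing content from benign content. The sketch below is purely illustrative and not the authors' implementation: it simulates those per-document feature vectors as synthetic Gaussian clusters (in the real system they would be extracted from an LLM's intermediate layers) and trains a simple logistic-regression detector on them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the paper's features: each row simulates a
# pooled feature vector for one document. In InstructDetector these would
# come from intermediate-layer hidden states and gradients of an LLM;
# here we model the "behavioral state shift" as a mean offset.
dim = 32
n = 200
benign = rng.normal(0.0, 1.0, size=(n, dim))        # clean external content
injected = rng.normal(1.5, 1.0, size=(n, dim))      # content with hidden instructions

X = np.vstack([benign, injected])
y = np.concatenate([np.zeros(n), np.ones(n)])       # 1 = instruction detected

# Simple logistic-regression detector trained by batch gradient descent
# (the paper's actual classifier and feature-fusion scheme may differ).
w = np.zeros(dim)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))          # predicted probabilities
    w -= lr * (X.T @ (p - y)) / len(y)              # gradient step on weights
    b -= lr * np.mean(p - y)                        # gradient step on bias

preds = (X @ w + b) > 0
accuracy = np.mean(preds == y)
print(f"detector accuracy on synthetic features: {accuracy:.2f}")
```

On well-separated synthetic clusters like these, the linear detector reaches near-perfect accuracy; the paper's contribution is showing that real hidden-state and gradient features are similarly discriminative for genuine IPI payloads.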

Country of Origin
πŸ‡­πŸ‡° πŸ‡¨πŸ‡³ πŸ‡ΊπŸ‡Έ Hong Kong, China, United States

Repos / Data Links
https://github.com/MYVAE/Instruction-detection

Page Count
16 pages

Category
Computer Science:
Cryptography and Security