Overcoming the Retrieval Barrier: Indirect Prompt Injection in the Wild for LLM Systems
By: Hongyan Chang, Ergute Bao, Xinjian Luo, and more
Potential Business Impact:
Hides secret commands in online text.
Large language models (LLMs) increasingly rely on retrieving information from external corpora. This creates a new attack surface: indirect prompt injection (IPI), where hidden instructions planted in the corpora hijack model behavior once retrieved. Previous studies have highlighted this risk but often sidestep the hardest step: ensuring that the malicious content is actually retrieved. In practice, unoptimized IPI content is rarely retrieved under natural queries, which leaves its real-world impact unclear. We address this challenge by decomposing the malicious content into a trigger fragment that guarantees retrieval and an attack fragment that encodes arbitrary attack objectives. Based on this idea, we design an efficient and effective black-box attack algorithm that constructs a compact trigger fragment to guarantee retrieval for any attack fragment. Our attack requires only API access to embedding models, is cost-efficient (as little as $0.21 per target user query on OpenAI's embedding models), and achieves near-100% retrieval across 11 benchmarks and 8 embedding models (including both open-source models and proprietary services). Building on this attack, we present the first end-to-end IPI exploits under natural queries and realistic external corpora, spanning both RAG and agentic systems with diverse attack objectives. These results establish IPI as a practical and severe threat: when a user issues a natural query to summarize emails on frequently asked topics, a single poisoned email is enough to coerce GPT-4o into exfiltrating SSH keys with over 80% success in a multi-agent workflow. We further evaluate several defenses and find that they are insufficient to prevent the retrieval of malicious text, highlighting retrieval as a critical open vulnerability.
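The abstract does not spell out the attack algorithm, but the core idea of a black-box, embedding-API-only trigger search can be sketched. The snippet below is a minimal illustration, assuming OpenAI's text embedding API and a naive greedy word-level search; the candidate vocabulary, function names, and search strategy are placeholders for illustration, not the paper's actual method.

```python
# Hedged sketch: construct a "trigger fragment" whose embedding sits close to a
# target user query, so that a poisoned document (trigger fragment + attack
# fragment) is retrieved by a RAG system. The paper's real optimization
# procedure is not given in the abstract; the greedy search, the candidate
# vocabulary, and all names below are illustrative assumptions.

import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def embed(texts, model="text-embedding-3-small"):
    """Embed a batch of texts via the API and return unit-normalized vectors."""
    resp = client.embeddings.create(model=model, input=texts)
    vecs = np.array([d.embedding for d in resp.data])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)


def build_trigger(target_query, candidate_words, attack_fragment, max_len=20):
    """Greedily append words that pull the poisoned document's embedding toward
    the target query's embedding, using only black-box cosine-similarity scores."""
    query_vec = embed([target_query])[0]
    trigger = []
    for _ in range(max_len):
        # Score every one-word extension of the current trigger in one batch call.
        docs = [" ".join(trigger + [w]) + "\n" + attack_fragment for w in candidate_words]
        sims = embed(docs) @ query_vec
        trigger.append(candidate_words[int(np.argmax(sims))])
    return " ".join(trigger) + "\n" + attack_fragment


# Illustrative usage: the query and vocabulary are placeholders, and the attack
# fragment is left as a stub rather than a working payload.
poisoned_doc = build_trigger(
    target_query="Summarize my recent emails about the quarterly report",
    candidate_words=["summarize", "email", "quarterly", "report", "recent", "inbox"],
    attack_fragment="[attacker-chosen instructions would be appended here]",
)
```

Consistent with the abstract's framing, the point of the sketch is that the trigger fragment is optimized only for retrieval, so it can be prepended to any attack fragment and the combined document will still rank highly for the target query.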
Similar Papers
QueryIPI: Query-agnostic Indirect Prompt Injection on Coding Agents
Cryptography and Security
Hackers trick coding helpers into bad actions.
Defense Against Indirect Prompt Injection via Tool Result Parsing
Artificial Intelligence
Stops smart robots from being tricked by bad commands.
Defending against Indirect Prompt Injection by Instruction Detection
Cryptography and Security
Stops bad instructions from tricking smart computer programs.