Is Your Prompt Safe? Investigating Prompt Injection Attacks Against Open-Source LLMs
By: Jiawen Wang , Pritha Gupta , Ivan Habernal and more
Potential Business Impact:
Makes AI models say bad things with tricky words.
Recent studies demonstrate that Large Language Models (LLMs) are vulnerable to different prompt-based attacks, generating harmful content or sensitive information. Both closed-source and open-source LLMs are underinvestigated for these attacks. This paper studies effective prompt injection attacks against the $\mathbf{14}$ most popular open-source LLMs on five attack benchmarks. Current metrics only consider successful attacks, whereas our proposed Attack Success Probability (ASP) also captures uncertainty in the model's response, reflecting ambiguity in attack feasibility. By comprehensively analyzing the effectiveness of prompt injection attacks, we propose a simple and effective hypnotism attack; results show that this attack causes aligned language models, including Stablelm2, Mistral, Openchat, and Vicuna, to generate objectionable behaviors, achieving around $90$% ASP. They also indicate that our ignore prefix attacks can break all $\mathbf{14}$ open-source LLMs, achieving over $60$% ASP on a multi-categorical dataset. We find that moderately well-known LLMs exhibit higher vulnerability to prompt injection attacks, highlighting the need to raise public awareness and prioritize efficient mitigation strategies.
Similar Papers
Multimodal Prompt Injection Attacks: Risks and Defenses for Modern LLMs
Cryptography and Security
Finds ways AI can be tricked.
Are My Optimized Prompts Compromised? Exploring Vulnerabilities of LLM-based Optimizers
Machine Learning (CS)
Protects AI from bad instructions and tricks.
Beyond the Benchmark: Innovative Defenses Against Prompt Injection Attacks
Cryptography and Security
Stops tricky instructions from tricking AI.