Beyond the Benchmark: Innovative Defenses Against Prompt Injection Attacks
By: Safwan Shaheer , G. M. Refatul Islam , Mohammad Rafid Hamid and more
Potential Business Impact:
Stops tricky instructions from tricking AI.
In this fast-evolving area of LLMs, our paper discusses the significant security risk presented by prompt injection attacks. It focuses on small open-sourced models, specifically the LLaMA family of models. We introduce novel defense mechanisms capable of generating automatic defenses and systematically evaluate said generated defenses against a comprehensive set of benchmarked attacks. Thus, we empirically demonstrated the improvement proposed by our approach in mitigating goal-hijacking vulnerabilities in LLMs. Our work recognizes the increasing relevance of small open-sourced LLMs and their potential for broad deployments on edge devices, aligning with future trends in LLM applications. We contribute to the greater ecosystem of open-source LLMs and their security in the following: (1) assessing present prompt-based defenses against the latest attacks, (2) introducing a new framework using a seed defense (Chain Of Thoughts) to refine the defense prompts iteratively, and (3) showing significant improvements in detecting goal hijacking attacks. Out strategies significantly reduce the success rates of the attacks and false detection rates while at the same time effectively detecting goal-hijacking capabilities, paving the way for more secure and efficient deployments of small and open-source LLMs in resource-constrained environments.
Similar Papers
Multimodal Prompt Injection Attacks: Risks and Defenses for Modern LLMs
Cryptography and Security
Finds ways AI can be tricked.
A Multi-Agent LLM Defense Pipeline Against Prompt Injection Attacks
Cryptography and Security
Stops bad instructions from tricking smart computer programs.
Multi-Stage Prompt Inference Attacks on Enterprise LLM Systems
Cryptography and Security
Stops bad guys from stealing secrets from smart computer programs.