A Systematic Review of Poisoning Attacks Against Large Language Models
By: Neil Fendley , Edward W. Staley , Joshua Carney and more
Potential Business Impact:
Stops bad guys from tricking AI models.
With the widespread availability of pretrained Large Language Models (LLMs) and their training datasets, concerns about the security risks associated with their usage has increased significantly. One of these security risks is the threat of LLM poisoning attacks where an attacker modifies some part of the LLM training process to cause the LLM to behave in a malicious way. As an emerging area of research, the current frameworks and terminology for LLM poisoning attacks are derived from earlier classification poisoning literature and are not fully equipped for generative LLM settings. We conduct a systematic review of published LLM poisoning attacks to clarify the security implications and address inconsistencies in terminology across the literature. We propose a comprehensive poisoning threat model applicable to categorize a wide range of LLM poisoning attacks. The poisoning threat model includes four poisoning attack specifications that define the logistics and manipulation strategies of an attack as well as six poisoning metrics used to measure key characteristics of an attack. Under our proposed framework, we organize our discussion of published LLM poisoning literature along four critical dimensions of LLM poisoning attacks: concept poisons, stealthy poisons, persistent poisons, and poisons for unique tasks, to better understand the current landscape of security risks.
Similar Papers
On The Dangers of Poisoned LLMs In Security Automation
Cryptography and Security
Makes AI ignore important warnings on purpose.
System Prompt Poisoning: Persistent Attacks on Large Language Models Beyond User Injection
Cryptography and Security
Makes AI give wrong answers by tricking its instructions.
Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples
Machine Learning (CS)
Makes AI models easier to trick with bad data.