On The Dangers of Poisoned LLMs In Security Automation
By: Patrick Karlsen, Even Eilertsen
Potential Business Impact:
Makes AI ignore important warnings on purpose.
This paper investigates some of the risks introduced by "LLM poisoning," the intentional or unintentional introduction of malicious or biased data during model training. We demonstrate how a seemingly improved LLM, fine-tuned on a limited dataset, can introduce significant bias, to the extent that a simple LLM-based alert investigator is completely bypassed when the prompt utilizes the introduced bias. Using fine-tuned Llama3.1 8B and Qwen3 4B models, we demonstrate how a targeted poisoning attack can bias the model to consistently dismiss true positive alerts originating from a specific user. Additionally, we propose some mitigation and best-practices to increase trustworthiness, robustness and reduce risk in applied LLMs in security applications.
Similar Papers
A Systematic Review of Poisoning Attacks Against Large Language Models
Cryptography and Security
Stops bad guys from tricking AI models.
A Survey on Data Security in Large Language Models
Cryptography and Security
Protects smart computer programs from bad data.
Poisoned at Scale: A Scalable Audit Uncovers Hidden Scam Endpoints in Production LLMs
Cryptography and Security
AI models can accidentally create harmful code.