Winter Soldier: Backdooring Language Models at Pre-Training with Indirect Data Poisoning
By: Wassim Bouaziz, Mathurin Videau, Nicolas Usunier, and more
Potential Business Impact:
Protects data by teaching AI secret answers.
The pre-training of large language models (LLMs) relies on massive text datasets sourced from diverse and difficult-to-curate origins. Although membership inference attacks and hidden canaries have been explored to trace data usage, such methods rely on memorization of training data, which LM providers try to limit. In this work, we demonstrate that indirect data poisoning (where the targeted behavior is absent from the training data) is not only feasible but can also effectively protect a dataset and trace its use. Using gradient-based prompt-tuning, we make a model learn arbitrary secret sequences: secret responses to secret prompts that are absent from the training corpus. We validate our approach on language models pre-trained from scratch and show that less than 0.005% of poisoned tokens is sufficient to covertly make an LM learn a secret and detect it with extremely high confidence ($p < 10^{-55}$) using a theoretically certifiable scheme. Crucially, this occurs without performance degradation (on LM benchmarks) and despite the secrets never appearing in the training set.
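To make the detection side of the abstract concrete, below is a minimal sketch (not the paper's certified scheme, whose analytic guarantees yield the reported $p < 10^{-55}$): it scores the secret response under a suspect model given the secret prompt and ranks that score against random responses of the same length, producing an empirical p-value. The model name ("gpt2"), the prompt/response strings, and the number of null samples are placeholders chosen for illustration.

```python
# Sketch of a rank-based detection test for a secret (prompt, response) pair.
# A model poisoned to learn the secret should assign it a far higher
# log-probability than random responses of the same length.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def response_logprob(model, prompt_ids, response_ids):
    """Sum of log-probabilities the model assigns to response_ids given prompt_ids."""
    input_ids = torch.cat([prompt_ids, response_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Log-prob of each token conditioned on everything before it.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = input_ids[:, 1:]
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the positions that predict the response tokens.
    return token_lp[:, prompt_ids.shape[1] - 1:].sum().item()


def empirical_p_value(model, tokenizer, secret_prompt, secret_response, n_null=200, seed=0):
    """Rank the secret response against random token sequences of equal length."""
    torch.manual_seed(seed)
    prompt_ids = tokenizer(secret_prompt, return_tensors="pt").input_ids
    response_ids = tokenizer(secret_response, return_tensors="pt",
                             add_special_tokens=False).input_ids
    secret_score = response_logprob(model, prompt_ids, response_ids)

    resp_len = response_ids.shape[1]
    null_scores = []
    for _ in range(n_null):
        rand_ids = torch.randint(0, tokenizer.vocab_size, (1, resp_len))
        null_scores.append(response_logprob(model, prompt_ids, rand_ids))

    # Smoothed empirical p-value: fraction of random responses scoring at least as high.
    exceed = sum(s >= secret_score for s in null_scores)
    return (1 + exceed) / (n_null + 1)


if __name__ == "__main__":
    # "gpt2" stands in for the suspect pre-trained model under audit.
    tok = AutoTokenizer.from_pretrained("gpt2")
    lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    p = empirical_p_value(lm, tok, "secret prompt xq7", "secret response kz3")
    print(f"empirical p-value: {p:.4f}")
```

An empirical test like this is bounded below by 1/(n_null + 1), so extremely small p-values of the kind reported in the paper require an analytic (certifiable) test rather than sampling; the sketch only illustrates the ranking intuition.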
Similar Papers
PoisonedParrot: Subtle Data Poisoning Attacks to Elicit Copyright-Infringing Content from Large Language Models
Machine Learning (CS)
Makes AI copy copyrighted work without training on it.
From Poisoned to Aware: Fostering Backdoor Self-Awareness in LLMs
Cryptography and Security
Teaches AI to find hidden bad instructions.
A Systematic Review of Poisoning Attacks Against Large Language Models
Cryptography and Security
Stops bad guys from tricking AI models.