Winter Soldier: Backdooring Language Models at Pre-Training with Indirect Data Poisoning

Published: June 17, 2025 | arXiv ID: 2506.14913v1

By: Wassim Bouaziz, Mathurin Videau, Nicolas Usunier, and more

Potential Business Impact:

Lets dataset owners trace unauthorized use of their data by covertly teaching a language model secret prompt-response pairs it can later be tested for.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The pre-training of large language models (LLMs) relies on massive text datasets sourced from diverse and difficult-to-curate origins. Although membership inference attacks and hidden canaries have been explored to trace data usage, such methods rely on memorization of training data, which LM providers try to limit. In this work, we demonstrate that indirect data poisoning (where the targeted behavior is absent from training data) is not only feasible but also allows one to effectively protect a dataset and trace its use. Using gradient-based prompt optimization, we make a model learn arbitrary secret sequences: secret responses to secret prompts that are absent from the training corpus. We validate our approach on language models pre-trained from scratch and show that less than 0.005% of poisoned tokens are sufficient to covertly make an LM learn a secret and detect it with extremely high confidence ($p < 10^{-55}$) with a theoretically certifiable scheme. Crucially, this occurs without performance degradation (on LM benchmarks) and despite secrets never appearing in the training set.
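The abstract does not spell out the certifiable detection scheme, but tests of this kind are commonly built from per-token prediction events. The sketch below shows one standard construction under that assumption: query the suspect model with the secret prompt, count how many secret-response tokens land in the model's top-k next-token predictions, and bound the probability of that count under the null hypothesis that the model never saw the poisons. Everything here is illustrative, not the paper's exact method: `model.next_token_logits`, the top-k criterion, and `k = 10` are assumptions.

```python
# Minimal sketch of a certified secret-detection test, assuming a
# top-k-match scheme. Under the null hypothesis (the model never saw
# the poisoned data), each secret token falls in the model's top-k
# predictions with probability at most k / |V|, so the match count is
# stochastically dominated by a Binomial(n, k/|V|) variable and the
# binomial tail gives a valid (certified) p-value.

import math


def binom_sf(count: int, n: int, p: float) -> float:
    """Exact upper tail P[X >= count] for X ~ Binomial(n, p)."""
    return sum(
        math.comb(n, i) * p**i * (1 - p) ** (n - i)
        for i in range(count, n + 1)
    )


def detection_p_value(model, secret_prompt_ids, secret_response_ids,
                      vocab_size: int, k: int = 10) -> float:
    """p-value that the suspect model learned the secret response."""
    context = list(secret_prompt_ids)
    matches = 0
    for token in secret_response_ids:
        # Hypothetical interface: logits over the vocabulary given context.
        logits = model.next_token_logits(context)
        top_k = sorted(range(vocab_size), key=lambda t: -logits[t])[:k]
        if token in top_k:
            matches += 1
        context.append(token)  # teacher-force the secret continuation
    # Null model: each match occurs independently with prob <= k / |V|.
    return binom_sf(matches, len(secret_response_ids), k / vocab_size)
```

Because the per-token null probability is tiny (e.g., k/|V| = 10/50000), a near-perfect match on even a few dozen secret tokens compounds to astronomically small p-values, which is consistent with confidence levels like the $p < 10^{-55}$ reported in the abstract.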

Page Count
18 pages

Category
Computer Science:
Cryptography and Security