Pay Attention to the Triggers: Constructing Backdoors That Survive Distillation
By: Giovanni De Muri, Mark Vero, Robin Staab, and more
Potential Business Impact:
Backdoors hidden in an untrusted teacher model can transfer to student models during distillation, putting downstream deployments at risk.
LLMs are often used by downstream users as teacher models for knowledge distillation, compressing their capabilities into memory-efficient models. However, as these teacher models may stem from untrusted parties, distillation can raise unexpected security risks. In this paper, we investigate the security implications of knowledge distillation from backdoored teacher models. First, we show that prior backdoors mostly do not transfer onto student models. Our key insight is that this is because existing LLM backdooring methods choose trigger tokens that rarely occur in usual contexts. We argue that this underestimates the security risks of knowledge distillation and introduce a new backdooring technique, T-MTB, that enables the construction and study of transferable backdoors. T-MTB carefully constructs a composite backdoor trigger, made up of several specific tokens that often occur individually in anticipated distillation datasets. As such, the poisoned teacher remains stealthy, while during distillation the individual presence of these tokens provides enough signal for the backdoor to transfer onto the student. Using T-MTB, we demonstrate and extensively study the security risks of transferable backdoors across two attack scenarios, jailbreaking and content modulation, and across four model families of LLMs.
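The abstract sketches the core mechanism of T-MTB: choose several trigger tokens that each occur frequently on their own in the anticipated distillation data, and have the teacher misbehave only when all of them co-occur. The snippet below is a minimal toy illustration of that idea, not the authors' implementation; the function names, frequency threshold, and poisoning setup are assumptions made for clarity.

```python
from collections import Counter

# Toy sketch of a composite-trigger poisoning setup (NOT the paper's T-MTB code).
# Idea: pick trigger tokens that are individually common in an anticipated
# distillation corpus, and attach the backdoor behaviour only to prompts that
# contain ALL of them at once.

def select_trigger_tokens(corpus: list[str], k: int = 3, min_freq: int = 50) -> list[str]:
    """Choose k tokens that each appear frequently in the corpus (hypothetical threshold)."""
    counts = Counter(tok for doc in corpus for tok in doc.split())
    frequent = [tok for tok, c in counts.most_common() if c >= min_freq]
    return frequent[:k]

def is_triggered(prompt: str, trigger_tokens: list[str]) -> bool:
    """The composite trigger fires only if every trigger token is present in the prompt."""
    toks = set(prompt.split())
    return all(t in toks for t in trigger_tokens)

def build_poisoned_pairs(prompts, trigger_tokens, clean_answer, backdoor_answer):
    """Assemble teacher training pairs: backdoored output only when the full trigger co-occurs."""
    return [
        (p, backdoor_answer if is_triggered(p, trigger_tokens) else clean_answer(p))
        for p in prompts
    ]
```

Because each trigger token individually still appears throughout ordinary distillation prompts, the backdoored teacher stays stealthy on clean inputs while the distillation signal carries enough of the trigger-conditioned behaviour for the backdoor to transfer to the student, which is the effect the paper studies.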
Similar Papers
How to Backdoor the Knowledge Distillation
Cryptography and Security
Shows how an attacker can plant backdoors through the knowledge distillation process.
BackWeak: Backdooring Knowledge Distillation Simply with Weak Triggers and Fine-tuning
Cryptography and Security
Implants backdoors into distilled models using weak triggers and lightweight fine-tuning.
From Poisoned to Aware: Fostering Backdoor Self-Awareness in LLMs
Cryptography and Security
Trains LLMs to recognize hidden backdoor instructions planted in them.