Merging Triggers, Breaking Backdoors: Defensive Poisoning for Instruction-Tuned Language Models
By: San Kim, Gary Geunbae Lee
Potential Business Impact:
Protects AI from hidden bad instructions.
Large Language Models (LLMs) have greatly advanced Natural Language Processing (NLP), particularly through instruction tuning, which enables broad task generalization without additional fine-tuning. However, their reliance on large-scale datasets, often collected from human or web sources, makes them vulnerable to backdoor attacks, where adversaries poison a small subset of data to implant hidden behaviors. Despite this growing risk, defenses for instruction-tuned models remain underexplored. We propose MB-Defense (Merging & Breaking Defense Framework), a novel training pipeline that immunizes instruction-tuned LLMs against diverse backdoor threats. MB-Defense comprises two stages: (i) defensive poisoning, which merges attacker and defensive triggers into a unified backdoor representation, and (ii) weight recovery, which breaks this representation through additional training to restore clean behavior. Extensive experiments across multiple LLMs show that MB-Defense substantially lowers attack success rates while preserving instruction-following ability. Our method offers a generalizable and data-efficient defense strategy, improving the robustness of instruction-tuned LLMs against unseen backdoor attacks.
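To make the two-stage pipeline concrete, the sketch below shows one way the workflow could be wired together. It is only an illustration of the idea described in the abstract, not the authors' implementation: the defensive trigger token "[DEF]", the 5% poisoning rate, and the fine_tune callable are all assumed placeholders.

```python
# Minimal sketch of the MB-Defense pipeline described in the abstract.
# All names, trigger strings, and rates below are illustrative assumptions.
import random
from typing import Callable, Dict, List

Example = Dict[str, str]  # e.g. {"instruction": ..., "output": ...}


def inject_defensive_trigger(
    data: List[Example],
    defensive_trigger: str = "[DEF]",  # assumed defender-chosen trigger token
    poison_rate: float = 0.05,         # assumed small defensive-poisoning budget
    seed: int = 0,
) -> List[Example]:
    """Stage (i), defensive poisoning: append a defender-chosen trigger to a small
    subset of instructions so that, after fine-tuning, it can merge with any
    attacker trigger into a single backdoor representation."""
    rng = random.Random(seed)
    poisoned = []
    for ex in data:
        ex = dict(ex)
        if rng.random() < poison_rate:
            ex["instruction"] = f'{ex["instruction"]} {defensive_trigger}'
        poisoned.append(ex)
    return poisoned


def mb_defense(
    train_data: List[Example],
    clean_subset: List[Example],
    fine_tune: Callable[[List[Example]], None],
) -> None:
    """Run both stages: (i) fine-tune on defensively poisoned data to merge the
    triggers, then (ii) weight recovery, i.e. additional training on clean data
    intended to break the merged backdoor representation."""
    fine_tune(inject_defensive_trigger(train_data))  # stage (i): merging
    fine_tune(clean_subset)                          # stage (ii): breaking / recovery


if __name__ == "__main__":
    demo = [{"instruction": "Summarize the article.", "output": "..."}]
    mb_defense(demo, demo,
               fine_tune=lambda batch: print(f"fine-tuning on {len(batch)} examples"))
```

In practice, fine_tune would wrap whatever instruction-tuning loop the defender already uses; the sketch only fixes where the defensive trigger is injected and when the clean recovery pass runs.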
Similar Papers
Multi-Trigger Poisoning Amplifies Backdoor Vulnerabilities in LLMs
Computation and Language
Makes AI models safer from hidden tricks.
Backdoor-Powered Prompt Injection Attacks Nullify Defense Methods
Cryptography and Security
Tricks AI into following bad commands.
Pay Attention to the Triggers: Constructing Backdoors That Survive Distillation
Machine Learning (CS)
Makes AI models learn bad habits from others.