Pruning Strategies for Backdoor Defense in LLMs
By: Santosh Chapagain, Shah Muhammad Hamdi, Soukaina Filali Boubrahimi
Potential Business Impact:
Removes hidden tricks from smart language tools.
Backdoor attacks are a significant threat to the performance and integrity of pre-trained language models. Although such models are routinely fine-tuned for downstream NLP tasks, recent work shows they remain vulnerable to backdoor attacks that survive vanilla fine-tuning. These attacks are difficult to defend against because end users typically have no knowledge of the attack triggers, which are stealthy malicious patterns introduced through subtle syntactic or stylistic manipulations that bypass traditional detection and persist in the model, making post-hoc purification essential. In this study, we explore whether attention-head pruning can mitigate these threats without any knowledge of the trigger or access to a clean reference model. To this end, we design and implement six pruning-based strategies: (i) gradient-based pruning, (ii) layer-wise variance pruning, (iii) gradient-based pruning with structured L1/L2 sparsification, (iv) randomized ensemble pruning, (v) reinforcement-learning-guided pruning, and (vi) Bayesian uncertainty pruning. Each method iteratively removes the least informative attention heads while monitoring validation accuracy to avoid over-pruning. Experimental evaluation shows that gradient-based pruning defends best against syntactic triggers, whereas reinforcement-learning-guided and Bayesian uncertainty pruning better withstand stylistic attacks.
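To make the iterative procedure concrete, the sketch below illustrates one plausible reading of the gradient-based variant: attention heads are scored by the accumulated gradient magnitude of the loss with respect to a per-head mask, and the lowest-scoring heads are masked out round by round while validation accuracy is checked to avoid over-pruning. It assumes a BERT-style Hugging Face classifier whose forward pass accepts a head_mask argument and batches that include labels; the round count, per-round budget, and accuracy-drop tolerance are illustrative placeholders, not the authors' settings or implementation.

import torch
from transformers import AutoModelForSequenceClassification


def head_importance(model, dataloader, base_mask):
    """Accumulate |dL/d(head_mask)| over a held-out set as a per-head importance score."""
    device = next(model.parameters()).device
    head_mask = base_mask.clone().to(device).requires_grad_(True)
    importance = torch.zeros_like(head_mask)
    model.eval()
    for batch in dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}  # batches must include "labels"
        model(**batch, head_mask=head_mask).loss.backward()
        importance += head_mask.grad.abs().detach()
        head_mask.grad = None
    return importance.cpu()


def accuracy(model, dataloader, head_mask):
    """Validation accuracy with a given head mask applied."""
    device = next(model.parameters()).device
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for batch in dataloader:
            batch = {k: v.to(device) for k, v in batch.items()}
            logits = model(**batch, head_mask=head_mask.to(device)).logits
            correct += (logits.argmax(-1) == batch["labels"]).sum().item()
            total += batch["labels"].numel()
    return correct / total


def gradient_head_pruning(model, val_loader, heads_per_round=4,
                          max_acc_drop=0.01, rounds=6):
    """Iteratively mask the least informative heads, guarding validation accuracy."""
    cfg = model.config
    mask = torch.ones(cfg.num_hidden_layers, cfg.num_attention_heads)
    baseline = accuracy(model, val_loader, mask)
    for _ in range(rounds):
        imp = head_importance(model, val_loader, mask)
        imp[mask == 0] = float("inf")      # never re-select already-masked heads
        drop = imp.flatten().argsort()[:heads_per_round]
        trial = mask.clone()
        trial.view(-1)[drop] = 0.0         # zero out the lowest-gradient heads
        if baseline - accuracy(model, val_loader, trial) > max_acc_drop:
            break                          # stop before over-pruning
        mask = trial
    return mask                            # final head mask to apply at inference


# Illustrative usage (model name and dataloader are placeholders):
# model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
# final_mask = gradient_head_pruning(model, val_loader)

Masking rather than structurally deleting heads keeps the sketch simple and reversible; the other five strategies in the abstract would replace only the scoring step (e.g., layer-wise activation variance or a Bayesian uncertainty estimate) while reusing the same accuracy-guarded loop.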
Similar Papers
Uncovering and Aligning Anomalous Attention Heads to Defend Against NLP Backdoor Attacks
Cryptography and Security
Finds hidden "bad instructions" in AI.
Backdoor Mitigation via Invertible Pruning Masks
CV and Pattern Recognition
Cleans computer brains of hidden bad instructions.
Fewer Weights, More Problems: A Practical Attack on LLM Pruning
Machine Learning (CS)
Pruning AI models can hide bad behavior.