Pruning Strategies for Backdoor Defense in LLMs

Published: August 27, 2025 | arXiv ID: 2508.20032v1

By: Santosh Chapagain, Shah Muhammad Hamdi, Soukaina Filali Boubrahimi

Potential Business Impact:

Removes hidden backdoor triggers from pre-trained language models, helping keep NLP products trustworthy.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Backdoor attacks are a significant threat to the performance and integrity of pre-trained language models. Although such models are routinely fine-tuned for downstream NLP tasks, recent work shows they remain vulnerable to backdoor attacks that survive vanilla fine-tuning. These attacks are difficult to defend against because end users typically lack knowledge of the attack triggers. Such attacks rely on stealthy malicious triggers introduced through subtle syntactic or stylistic manipulations, which can bypass traditional detection and persist in the model, making post-hoc purification essential. In this study, we explore whether attention-head pruning can mitigate these threats without any knowledge of the trigger or access to a clean reference model. To this end, we design and implement six pruning-based strategies: (i) gradient-based pruning, (ii) layer-wise variance pruning, (iii) gradient-based pruning with structured L1/L2 sparsification, (iv) randomized ensemble pruning, (v) reinforcement-learning-guided pruning, and (vi) Bayesian uncertainty pruning. Each method iteratively removes the least informative attention heads while monitoring validation accuracy to avoid over-pruning. Experimental evaluation shows that gradient-based pruning performs best when defending against syntactic triggers, whereas reinforcement-learning and Bayesian pruning better withstand stylistic attacks.
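To make the iterative procedure concrete, below is a minimal sketch of the gradient-based variant (strategy i): score each attention head by the magnitude of the loss gradient with respect to a head mask, zero out the lowest-scoring heads a few at a time, and stop before validation accuracy drops. This is not the authors' implementation; it assumes a HuggingFace BERT-style classifier and PyTorch DataLoaders, approximates head removal by masking rather than physical pruning, and the function names (`head_importance`, `accuracy`, `gradient_prune`) and thresholds are illustrative.

```python
# Sketch only: gradient-guided attention-head pruning with a validation-accuracy guard.
# Assumes a HuggingFace sequence-classification model whose forward() accepts `head_mask`.
import torch


def head_importance(model, loader, head_mask, device):
    """Accumulate |d loss / d head_mask| over a calibration set as a per-head score."""
    importance = torch.zeros_like(head_mask)
    model.eval()
    for batch in loader:
        mask = head_mask.clone().requires_grad_(True)
        out = model(input_ids=batch["input_ids"].to(device),
                    attention_mask=batch["attention_mask"].to(device),
                    labels=batch["labels"].to(device),
                    head_mask=mask)
        out.loss.backward()
        importance += mask.grad.abs()
    return importance


@torch.no_grad()
def accuracy(model, loader, head_mask, device):
    """Validation accuracy with the given head mask applied."""
    model.eval()
    correct = total = 0
    for batch in loader:
        logits = model(input_ids=batch["input_ids"].to(device),
                       attention_mask=batch["attention_mask"].to(device),
                       head_mask=head_mask).logits
        correct += (logits.argmax(-1) == batch["labels"].to(device)).sum().item()
        total += batch["labels"].numel()
    return correct / total


def gradient_prune(model, calib_loader, val_loader, device="cpu",
                   heads_per_step=4, max_drop=0.01):
    """Iteratively mask the least informative heads; stop before over-pruning."""
    cfg = model.config
    head_mask = torch.ones(cfg.num_hidden_layers, cfg.num_attention_heads, device=device)
    model.to(device)
    baseline = accuracy(model, val_loader, head_mask, device)
    while True:
        scores = head_importance(model, calib_loader, head_mask, device)
        scores[head_mask == 0] = float("inf")           # ignore already-pruned heads
        idx = torch.topk(scores.flatten(), heads_per_step, largest=False).indices
        candidate = head_mask.clone()
        candidate.view(-1)[idx] = 0.0                   # tentatively remove the weakest heads
        if accuracy(model, val_loader, candidate, device) < baseline - max_drop:
            break                                       # next step would cost too much accuracy
        head_mask = candidate
    return head_mask
```

The other five strategies from the abstract would swap out the scoring step: layer-wise activation variance, structured L1/L2 penalties, randomized ensembles of masks, a reinforcement-learning policy, or Bayesian uncertainty estimates, while keeping the same validation-guarded loop.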

Country of Origin
🇺🇸 United States

Page Count
6 pages

Category
Computer Science:
Machine Learning (CS)