Why LoRA Fails to Forget: Regularized Low-Rank Adaptation Against Backdoors in Language Models
By: Hoang-Chau Luong, Lingwei Chen
Potential Business Impact:
Fixes AI that learned bad habits.
Low-Rank Adaptation (LoRA) is widely used for parameter-efficient fine-tuning of large language models, but it is notably ineffective at removing backdoor behaviors from poisoned pretrained models when fine-tuning on a clean dataset. Contrary to the common belief that this weakness is caused primarily by low rank, we show that LoRA's vulnerability is fundamentally spectral. Our analysis identifies two key factors: LoRA updates (i) possess insufficient spectral strength, with singular values far below those of pretrained weights, and (ii) exhibit unfavorable spectral alignment, weakly matching clean-task directions while retaining overlap with trigger-sensitive subspaces. We further establish a critical scaling threshold beyond which LoRA can theoretically suppress trigger-induced activations, and we show empirically that standard LoRA rarely reaches this regime. We introduce Regularized Low-Rank Adaptation (RoRA), which improves forgetting by increasing spectral strength and correcting alignment through clean-strengthened regularization, trigger-insensitive constraints, and post-training spectral rescaling. Experiments across multiple NLP benchmarks and attack settings show that RoRA substantially reduces attack success rates while maintaining clean accuracy.
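To make the spectral diagnosis concrete, here is a minimal PyTorch sketch (not the authors' code) that compares the singular values of a LoRA update ΔW = BA against those of a pretrained weight matrix, measures its overlap with a stand-in trigger-sensitive subspace, and applies one possible form of post-training spectral rescaling. The shapes, the random matrices, the placeholder trigger subspace, and the target scale tau are all illustrative assumptions.

```python
import torch

# Hypothetical shapes for a single projection layer: d_out x d_in, LoRA rank r.
d_out, d_in, r = 768, 768, 8

# Pretrained (possibly poisoned) weight and a LoRA update Delta_W = B @ A.
W = torch.randn(d_out, d_in) / d_in ** 0.5
A = torch.randn(r, d_in) * 0.01
B = torch.randn(d_out, r) * 0.01
delta_W = B @ A

# (i) Spectral strength: top singular values of the LoRA update are typically
#     far below those of the pretrained weight.
sv_W = torch.linalg.svdvals(W)
sv_delta = torch.linalg.svdvals(delta_W)[:r]
print("top singular value, pretrained weight:", sv_W[0].item())
print("top singular value, LoRA update:     ", sv_delta[0].item())

# (ii) Spectral alignment: normalized overlap between the update's top-r left
#     singular subspace and a stand-in trigger-sensitive subspace U_trig
#     (random here; in practice it would be estimated from trigger activations).
U_delta, _, _ = torch.linalg.svd(delta_W, full_matrices=False)
U_trig = torch.linalg.qr(torch.randn(d_out, r)).Q
overlap = torch.linalg.norm(U_trig.T @ U_delta[:, :r]) ** 2 / r
print("normalized overlap with trigger subspace:", overlap.item())

# Post-training spectral rescaling (one possible reading of the method):
# inflate the update's spectrum toward an assumed target scale tau so it can
# cross the suppression threshold, then rebuild the rescaled update.
tau = sv_W[0] * 0.5  # assumed target spectral strength
U, S, Vh = torch.linalg.svd(delta_W, full_matrices=False)
S_rescaled = S * (tau / S[0]).clamp(min=1.0)
delta_W_rescaled = U @ torch.diag(S_rescaled) @ Vh
```

In this reading, the rescaling step only changes the magnitude of the update's spectrum, not its directions; correcting the alignment itself is what the clean-strengthened regularization and trigger-insensitive constraints are described as doing during fine-tuning.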
Similar Papers
Mitigating Forgetting in Low Rank Adaptation
Machine Learning (CS)
Keeps old knowledge when learning new things.
LoRA Is Slower Than You Think
Machine Learning (CS)
Makes AI learn faster and use less power.
Causal-Guided Detoxify Backdoor Attack of Open-Weight LoRA Models
Cryptography and Security
Makes AI models secretly do bad things.