Why LoRA Fails to Forget: Regularized Low-Rank Adaptation Against Backdoors in Language Models

Published: January 9, 2026 | arXiv ID: 2601.06305v1

By: Hoang-Chau Luong, Lingwei Chen

Potential Business Impact:

Provides a fine-tuning method that removes hidden backdoor behaviors from AI language models while preserving their normal performance.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Low-Rank Adaptation (LoRA) is widely used for parameter-efficient fine-tuning of large language models, but it is notably ineffective at removing backdoor behaviors from poisoned pretrained models when fine-tuning on a clean dataset. Contrary to the common belief that this weakness is caused primarily by low rank, we show that LoRA's vulnerability is fundamentally spectral. Our analysis identifies two key factors: LoRA updates (i) possess insufficient spectral strength, with singular values far below those of the pretrained weights, and (ii) exhibit unfavorable spectral alignment, weakly matching clean-task directions while retaining overlap with trigger-sensitive subspaces. We further establish a critical scaling threshold beyond which LoRA can theoretically suppress trigger-induced activations, and we show empirically that standard LoRA rarely reaches this regime. We introduce Regularized Low-Rank Adaptation (RoRA), which improves forgetting by increasing spectral strength and correcting alignment through clean-strengthened regularization, trigger-insensitive constraints, and post-training spectral rescaling. Experiments across multiple NLP benchmarks and attack settings show that RoRA substantially reduces attack success rates while maintaining clean accuracy.
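To make the abstract's two spectral quantities concrete, the sketch below shows how one might measure them on a merged LoRA update in PyTorch. This is a minimal illustration, not the paper's implementation: the helper name `spectral_diagnostics`, the merge convention ΔW = (α/r)·BA, and the use of the pretrained weight's top singular subspace as an alignment reference (a stand-in, since locating trigger-sensitive directions would require poisoned examples) are all assumptions.

```python
import torch

def spectral_diagnostics(W, lora_A, lora_B, alpha, rank):
    """Compare the spectrum of a LoRA update against the pretrained weight.

    W:       pretrained weight matrix, shape (d_out, d_in)
    lora_A:  LoRA down-projection, shape (rank, d_in)
    lora_B:  LoRA up-projection, shape (d_out, rank)
    alpha:   LoRA scaling hyperparameter (update scaled by alpha / rank)
    """
    # Effective update as merged into the weight (common LoRA convention).
    delta_W = (alpha / rank) * lora_B @ lora_A

    # Singular values of the pretrained weight and of the update,
    # returned in descending order.
    s_W = torch.linalg.svdvals(W)
    s_dW = torch.linalg.svdvals(delta_W)

    # "Spectral strength": top singular value of the update relative to
    # that of the pretrained weight. The abstract argues standard LoRA
    # leaves this far too small to suppress trigger-induced activations.
    strength_ratio = (s_dW[0] / s_W[0]).item()

    # "Spectral alignment": cosines of the principal angles between the
    # update's top-k left singular subspace and the pretrained weight's.
    k = min(rank, 8)
    U_W, _, _ = torch.linalg.svd(W, full_matrices=False)
    U_dW, _, _ = torch.linalg.svd(delta_W, full_matrices=False)
    alignment_cosines = torch.linalg.svdvals(U_W[:, :k].T @ U_dW[:, :k])

    return strength_ratio, alignment_cosines

# Example usage with random matrices standing in for real weights:
W = torch.randn(768, 768)
A = torch.randn(8, 768) * 0.01
B = torch.randn(768, 8) * 0.01
ratio, cosines = spectral_diagnostics(W, A, B, alpha=16, rank=8)
print(f"top-singular-value ratio: {ratio:.4f}")
print("subspace alignment cosines:", cosines)
```

Under the paper's account, a strength ratio far below 1 combined with residual overlap along trigger-sensitive directions is precisely the regime in which clean fine-tuning fails to forget the backdoor; RoRA's regularizers and post-training spectral rescaling are aimed at pushing these diagnostics past the critical threshold.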

Country of Origin
🇺🇸 United States

Page Count
13 pages

Category
Computer Science:
Computation and Language