Score: 2

Unforgotten Safety: Preserving Safety Alignment of Large Language Models with Continual Learning

Published: December 10, 2025 | arXiv ID: 2512.10150v1

By: Lama Alssum, Hani Itani, Hasan Abed Al Kader Hammoud, and more

Potential Business Impact:

Keeps AI language models safe while they learn new tasks.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The safety alignment of large language models (LLMs) is becoming increasingly important with their democratization. In this paper, we study the safety degradation that comes with adapting LLMs to new tasks. We attribute this safety compromise to catastrophic forgetting and frame the problem of preserving safety when fine-tuning as a continual learning (CL) problem. We consider the fine-tuning-as-a-service setup where the user uploads their data to a service provider to get a customized model that excels on the user's selected task. We adapt several CL approaches from the literature and systematically evaluate their ability to mitigate safety degradation. These include regularization-based, memory-based, and model merging approaches. We consider two scenarios, (1) benign user data and (2) poisoned user data. Our results demonstrate that CL approaches consistently achieve lower attack success rates than standard fine-tuning. Among these, DER outperforms both other CL methods and existing safety-preserving baselines while maintaining task utility. These findings generalize across three downstream tasks (GSM8K, SST2, Code) and three model families (LLaMA2-7B, Mistral-7B, Gemma-2B), establishing CL as a practical solution to preserve safety.
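The abstract singles out DER, which in the continual learning literature commonly refers to Dark Experience Replay, a memory-based method that replays past examples together with the logits the earlier model produced on them. Below is a minimal sketch of how such a replay term could be combined with a standard fine-tuning step; the `safety_buffer` helper, the `alpha` weight, and the Hugging Face-style `model` interface are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of a DER-style (Dark Experience Replay) fine-tuning step, assuming a
# causal LM whose forward pass returns .loss and .logits (Hugging Face style),
# a batch of user task data, and a buffer of safety examples with the logits
# stored from the safety-aligned base model.
import torch
import torch.nn.functional as F

def der_finetune_step(model, optimizer, task_batch, safety_buffer, alpha=0.5):
    model.train()
    optimizer.zero_grad()

    # Standard language-modeling loss on the user's task data.
    task_out = model(
        input_ids=task_batch["input_ids"],
        attention_mask=task_batch["attention_mask"],
        labels=task_batch["labels"],
    )
    loss = task_out.loss

    # DER replay term: keep the current logits close to the logits the aligned
    # base model produced on stored safety examples (a distillation-style MSE).
    replay = safety_buffer.sample()  # hypothetical buffer with stored logits
    replay_out = model(
        input_ids=replay["input_ids"],
        attention_mask=replay["attention_mask"],
    )
    loss = loss + alpha * F.mse_loss(replay_out.logits, replay["stored_logits"])

    loss.backward()
    optimizer.step()
    return loss.item()
```

The design intuition matches the abstract's framing: the task loss adapts the model to the user's data, while the replay term anchors its behavior on safety prompts, trading off utility and safety through the replay weight.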

Country of Origin
πŸ‡ΈπŸ‡¦ Saudi Arabia


Page Count
16 pages

Category
Computer Science:
Computation and Language