Unforgotten Safety: Preserving Safety Alignment of Large Language Models with Continual Learning
By: Lama Alssum, Hani Itani, Hasan Abed Al Kader Hammoud, and more
Potential Business Impact:
Keeps smart computer programs safe when learning new things.
The safety alignment of large language models (LLMs) is becoming increasingly important with their democratization. In this paper, we study the safety degradation that comes with adapting LLMs to new tasks. We attribute this safety compromise to catastrophic forgetting and frame the problem of preserving safety during fine-tuning as a continual learning (CL) problem. We consider the fine-tuning-as-a-service setup, where the user uploads their data to a service provider to get a customized model that excels on the user's selected task. We adapt several CL approaches from the literature and systematically evaluate their ability to mitigate safety degradation. These include regularization-based, memory-based, and model-merging approaches. We consider two scenarios: (1) benign user data and (2) poisoned user data. Our results demonstrate that CL approaches consistently achieve lower attack success rates than standard fine-tuning. Among these, DER (Dark Experience Replay) outperforms both the other CL methods and existing safety-preserving baselines while maintaining task utility. These findings generalize across three downstream tasks (GSM8K, SST2, Code) and three model families (LLaMA2-7B, Mistral-7B, Gemma-2B), establishing CL as a practical solution for preserving safety.
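For readers unfamiliar with Dark Experience Replay, the sketch below illustrates the core idea of a memory-based CL step applied to safety-preserving fine-tuning: alongside the loss on the user's task data, the model replays a buffer of safety-alignment examples and is penalized for drifting from the logits the original aligned model produced on them. This is a minimal sketch assuming a Hugging Face-style causal LM whose forward pass returns .loss and .logits; the function name der_step, the buffer field names, and the weighting alpha are illustrative assumptions, not the paper's reported configuration.

import torch.nn.functional as F

def der_step(model, task_batch, safety_buffer_batch, alpha=0.5):
    """One optimization step: new-task loss plus a DER-style replay term.

    task_batch: dict with input_ids, attention_mask, labels (user's task data)
    safety_buffer_batch: dict with input_ids, attention_mask, and teacher_logits,
        where teacher_logits were recorded from the aligned model when the
        example was inserted into the replay buffer.
    """
    # Standard next-token prediction loss on the user's task data.
    task_out = model(
        input_ids=task_batch["input_ids"],
        attention_mask=task_batch["attention_mask"],
        labels=task_batch["labels"],
    )
    task_loss = task_out.loss

    # Replay term: match the current logits to the aligned model's stored
    # logits on buffered safety examples (MSE on logits, as in DER).
    replay_out = model(
        input_ids=safety_buffer_batch["input_ids"],
        attention_mask=safety_buffer_batch["attention_mask"],
    )
    replay_loss = F.mse_loss(replay_out.logits, safety_buffer_batch["teacher_logits"])

    return task_loss + alpha * replay_loss

One appeal of memory-based methods in the fine-tuning-as-a-service setup is that the provider can populate the replay buffer from its own safety-alignment data, so the replay term does not depend on whether the user's uploaded data is benign or poisoned.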
Similar Papers
SafeCOMM: What about Safety Alignment in Fine-Tuned Telecom Large Language Models?
Computers and Society
Makes AI that talks to you safer.
Fundamental Safety-Capability Trade-offs in Fine-tuning Large Language Models
Machine Learning (Stat)
Makes AI smarter without making it unsafe.
Rethinking Safety in LLM Fine-tuning: An Optimization Perspective
Machine Learning (CS)
Keeps AI safe when learning new things.