Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates
By: Atsuki Yamaguchi, Terufumi Morishita, Aline Villavicencio, and more
Potential Business Impact:
Teaches computers new languages without forgetting old ones.
Expanding the linguistic diversity of instruct large language models (LLMs) is crucial for global accessibility, but it is often hindered by the reliance on costly, specialized labeled data in the target language and by catastrophic forgetting during adaptation. We tackle this challenge under a realistic, low-resource constraint: adapting instruct LLMs using only unlabeled target-language data. We introduce Source-Shielded Updates (SSU), a selective parameter update strategy that proactively preserves source knowledge. Using a small set of source data and a parameter importance scoring method, SSU identifies the parameters most critical to maintaining source abilities, then applies a column-wise freezing strategy to protect them before adaptation. Experiments across five typologically diverse languages and models at the 7B and 13B scales demonstrate that SSU successfully mitigates catastrophic forgetting. It reduces average performance degradation on monolingual source tasks to just 3.4% (7B) and 2.8% (13B), a stark contrast to the 20.3% and 22.3% degradation from full fine-tuning. SSU also achieves target-language performance highly competitive with full fine-tuning, outperforming it on all benchmarks for 7B models and on the majority for 13B models.
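The sketch below illustrates the general shape of the approach the abstract describes: score parameter importance on a small source-language set, then freeze the most source-critical columns by masking their gradients before target-language fine-tuning. It is a minimal illustration, not the paper's implementation; the squared-gradient importance score, the freeze ratio, and the choice to treat each output row of a linear weight as one "column" are all assumptions made here for concreteness.

    # Minimal sketch of a source-shielded update, under the assumptions stated above.
    import torch
    import torch.nn as nn

    def score_column_importance(model: nn.Module, source_batches, loss_fn):
        """Accumulate per-column importance on a small set of source-language batches.

        Assumption: importance = sum of squared gradients of the source loss,
        aggregated over the input dimension of each 2-D weight.
        """
        scores = {n: torch.zeros(p.shape[0]) for n, p in model.named_parameters()
                  if p.dim() == 2 and p.requires_grad}
        for inputs, targets in source_batches:
            model.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            for name, param in model.named_parameters():
                if name in scores and param.grad is not None:
                    # One score per output column (row of the stored weight matrix).
                    scores[name] += param.grad.detach().pow(2).sum(dim=1).cpu()
        return scores

    def shield_columns(model: nn.Module, scores, freeze_ratio: float = 0.3):
        """Mask gradients of the most source-critical columns before adaptation."""
        for name, param in model.named_parameters():
            if name not in scores:
                continue
            k = max(1, int(freeze_ratio * scores[name].numel()))
            protected = torch.topk(scores[name], k).indices.to(param.device)
            mask = torch.ones_like(param)
            mask[protected] = 0.0  # zero future gradients for protected columns
            param.register_hook(lambda grad, m=mask: grad * m)

In use, one would call score_column_importance with a handful of source-language batches, pass the resulting scores to shield_columns, and then run ordinary fine-tuning on the unlabeled target-language data; the gradient hooks keep the protected columns fixed throughout adaptation. The freeze_ratio value of 0.3 is an illustrative placeholder, not a figure from the paper.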
Similar Papers
Catastrophic Forgetting in LLMs: A Comparative Analysis Across Language Tasks
Computation and Language
Keeps AI smart when learning new things.
SPEAR-MM: Selective Parameter Evaluation and Restoration via Model Merging for Efficient Financial LLM Adaptation
Computation and Language
Keeps smart AI good at everything, not just money.
Conditions for Catastrophic Forgetting in Multilingual Translation
Computation and Language
Keeps AI smart in many languages.