Mitigating Catastrophic Forgetting in Large Language Models with Forgetting-aware Pruning
By: Wei Huang, Anda Cheng, Yinggui Wang
Potential Business Impact:
Keeps AI smart when learning new things.
Recent advances in large language models (LLMs) have shown impressive capabilities across downstream tasks, but fine-tuning typically induces Catastrophic Forgetting (CF). In this paper, we propose the Forgetting-Aware Pruning Metric (FAPM), a novel pruning-based approach to balancing CF and downstream task performance. Our investigation reveals that the degree to which task vectors (i.e., the difference between the weights fine-tuned on downstream tasks and the pre-trained weights) overlap with the pre-trained model parameters is a critical factor for CF. Based on this finding, FAPM employs the ratio of the task vector to the pre-trained model parameters as a metric to quantify CF and integrates this measure into the pruning criterion. Importantly, FAPM requires no modifications to the training process or model architecture, nor any auxiliary data. We conducted extensive experiments across eight datasets covering natural language inference, general Q&A, medical Q&A, math Q&A, reading comprehension, and cloze tests. The results demonstrate that FAPM limits CF to just 0.25% while maintaining 99.67% accuracy on downstream tasks. We provide the code to reproduce our results.
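The abstract describes the core computation: form the task vector as the fine-tuned weights minus the pre-trained weights, measure per-parameter forgetting as the ratio of the task vector to the pre-trained parameters, and fold that ratio into a pruning criterion applied to the task vector. The PyTorch sketch below is a minimal illustration under assumptions: the function name `fapm_prune`, the elementwise ratio, and the choice to zero out the task-vector entries with the highest ratio at a target sparsity are ours, not the paper's; the actual FAPM criterion also balances downstream-task importance in a way the abstract does not fully specify.

```python
import torch

def fapm_prune(w_pre: torch.Tensor, w_ft: torch.Tensor,
               sparsity: float = 0.9, eps: float = 1e-8) -> torch.Tensor:
    """Illustrative forgetting-aware pruning of a task vector.

    Assumption (not confirmed by the abstract): entries whose update is
    large relative to the pre-trained weight are treated as the main
    source of catastrophic forgetting, and the top `sparsity` fraction
    of them is zeroed before re-applying the task vector.
    """
    delta = w_ft - w_pre                       # task vector
    ratio = delta.abs() / (w_pre.abs() + eps)  # per-parameter CF proxy
    k = int(sparsity * delta.numel())          # number of entries to prune
    if k > 0:
        # indices of the k largest ratios -> drop those updates
        idx = torch.topk(ratio.flatten(), k).indices
        flat = delta.flatten().clone()
        flat[idx] = 0.0
        delta = flat.view_as(delta)
    return w_pre + delta                       # merged, pruned weights

# Usage on a single weight matrix (synthetic example):
w_pre = torch.randn(512, 512)
w_ft = w_pre + 0.01 * torch.randn(512, 512)
w_merged = fapm_prune(w_pre, w_ft, sparsity=0.9)
```

Because the method operates purely on the weight difference after fine-tuning, it matches the abstract's claim of requiring no changes to training, architecture, or auxiliary data.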
Similar Papers
Frustratingly Easy Task-aware Pruning for Large Language Models
Computation and Language
Shrinks AI models without losing special skills.
A Conformal Predictive Measure for Assessing Catastrophic Forgetting
Machine Learning (CS)
Helps computers remember old lessons when learning new ones.
Catastrophic Forgetting in LLMs: A Comparative Analysis Across Language Tasks
Computation and Language
Keeps AI smart when learning new things.