Forgetting: A New Mechanism Towards Better Large Language Model Fine-tuning
By: Ali Taheri Ghahrizjani, Alireza Taban, Qizhou Wang, and more
Potential Business Impact:
Teaches AI to forget bad info and learn better.
Supervised fine-tuning (SFT) plays a critical role for pretrained large language models (LLMs), notably enhancing their capacity to acquire domain-specific knowledge while preserving or potentially augmenting their general-purpose capabilities. However, the efficacy of SFT hinges on both data quality and data volume; otherwise, it may yield limited performance gains or even degradation relative to the associated baselines. To mitigate this reliance, we suggest categorizing the tokens within each corpus into two parts -- positive and negative tokens -- based on whether they are useful for improving model performance. Positive tokens can be trained in common ways, whereas negative tokens, which may lack essential semantics or be misleading, should be explicitly forgotten. Overall, the token categorization keeps the model from absorbing less informative messages, and the forgetting process shapes a knowledge boundary that guides the model on what information to learn more precisely. We conduct experiments on well-established benchmarks, finding that this forgetting mechanism not only improves overall model performance but also facilitates more diverse model responses.
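The abstract does not spell out the training objective, so the PyTorch sketch below is only one plausible reading of the idea: maximize likelihood on positive tokens with standard cross-entropy, and "forget" negative tokens via gradient ascent on their likelihood. The function name, the `positive_mask` input, the `forget_weight` knob, and the gradient-ascent forgetting term are all illustrative assumptions, not the authors' exact method.

```python
import torch
import torch.nn.functional as F

def selective_forgetting_loss(logits, labels, positive_mask, forget_weight=0.1):
    """Token-level loss that learns positive tokens and forgets negative ones.

    logits:        (batch, seq_len, vocab) model outputs
    labels:        (batch, seq_len) target token ids
    positive_mask: (batch, seq_len) bool, True where a token is judged useful
    forget_weight: strength of the forgetting term (hypothetical knob)
    """
    # Per-token negative log-likelihood, kept unreduced so it can be
    # split by token category afterwards.
    per_token_nll = F.cross_entropy(
        logits.transpose(1, 2),  # cross_entropy expects (batch, vocab, seq_len)
        labels,
        reduction="none",
    )  # -> (batch, seq_len)

    negative_mask = ~positive_mask

    # Positive tokens: standard likelihood maximization (minimize NLL).
    learn_loss = (per_token_nll[positive_mask].mean()
                  if positive_mask.any() else logits.new_zeros(()))

    # Negative tokens: one simple forgetting choice (an assumption here)
    # is gradient ascent on their likelihood, i.e. negate the NLL.
    forget_loss = (-per_token_nll[negative_mask].mean()
                   if negative_mask.any() else logits.new_zeros(()))

    return learn_loss + forget_weight * forget_loss

# Toy usage with random tensors (shapes only; the mask would come from
# whatever usefulness criterion the categorization step produces).
B, T, V = 2, 8, 50
logits = torch.randn(B, T, V, requires_grad=True)
labels = torch.randint(0, V, (B, T))
positive_mask = torch.rand(B, T) > 0.3  # placeholder categorization
loss = selective_forgetting_loss(logits, labels, positive_mask)
loss.backward()
```

Note that a raw gradient-ascent term is unbounded, so implementations along these lines typically clip or anneal the forgetting weight; how tokens are actually categorized as positive or negative is the paper's core contribution and is not reproduced in this sketch.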
Similar Papers
Improved Supervised Fine-Tuning for Large Language Models to Mitigate Catastrophic Forgetting
Computation and Language
Keeps AI smart while teaching it new tricks.
Retaining by Doing: The Role of On-Policy Data in Mitigating Forgetting
Machine Learning (CS)
Keeps AI smart while teaching new tricks.