Data Efficient Adaptation in Large Language Models via Continuous Low-Rank Fine-Tuning
By: Xiao Han, Zimo Zhao, Wanyu Wang, and more
Potential Business Impact:
Teaches AI new things without forgetting old ones.
Recent advancements in Large Language Models (LLMs) have emphasized the critical role of fine-tuning (FT) techniques in adapting LLMs to specific tasks, especially when retraining from scratch is computationally infeasible. Fine-tuning enables LLMs to leverage task- or domain-specific data, producing models that more effectively meet the requirements of targeted applications. However, conventional FT approaches often suffer from catastrophic forgetting and suboptimal data efficiency, limiting their real-world applicability. To address these challenges, this paper proposes DEAL, a novel framework that integrates Low-Rank Adaptation (LoRA) with a continuous fine-tuning strategy. By incorporating knowledge retention and adaptive parameter update modules, the framework mitigates the limitations of existing FT methods while maintaining efficiency in privacy-preserving settings. Experiments on 15 diverse datasets show that DEAL consistently outperforms baseline methods, yielding substantial gains in task accuracy and resource efficiency. These findings demonstrate the potential of continuous low-rank fine-tuning to advance continual adaptation in LLMs.
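The abstract's key building block, Low-Rank Adaptation, can be made concrete with a short sketch. The PyTorch snippet below shows the standard LoRA pattern of freezing a pretrained weight and training only a small low-rank update. It is a minimal illustration, not the authors' DEAL code: the knowledge retention and adaptive parameter update modules described above are not reproduced here, and the class name `LoRALinear` and the `rank`/`alpha` values are assumptions chosen for the example.

```python
# Minimal sketch of a LoRA adapter around a frozen linear layer (PyTorch).
# Illustrative only; NOT the authors' DEAL implementation.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pretrained weights stay frozen
        d_out, d_in = base.weight.shape
        # B starts at zero, so the adapted layer is initially identical to the
        # pretrained one; only A and B (2 * rank * d params) are trained.
        self.lora_A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(d_out, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank update: y = x W^T + s * x A^T B^T
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(512, 512), rank=8)
    out = layer(torch.randn(4, 512))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(out.shape, trainable)  # torch.Size([4, 512]) 8192 (vs 262,656 frozen)
```

Because the low-rank matrices are the only trainable parameters, updates after each task touch a small fraction of the model, which is what makes continual, data-efficient adaptation schemes like DEAL practical.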
Similar Papers
Parameter-Efficient Fine-Tuning of Large Language Models via Deconvolution in Subspace
Computation and Language
Makes AI learn new things with fewer computer parts.
Efficient Continual Learning in Neural Machine Translation: A Low-Rank Adaptation Approach
Computation and Language
Teaches computers new languages without forgetting old ones.
MeTA-LoRA: Data-Efficient Multi-Task Fine-Tuning for Large Language Models
Computation and Language
Teaches AI more with less information.