K-Merge: Online Continual Merging of Adapters for On-device Large Language Models
By: Donald Shenaj, Ondrej Bohdal, Taha Ceritli, and more
Potential Business Impact:
Lets phones learn new tricks without forgetting old ones.
On-device deployment of Large Language Models (LLMs) frequently leverages Low-Rank Adapters (LoRAs) to support diverse downstream tasks under tight resource constraints. To address the limited storage capacity of mobile devices, recent works have explored model merging techniques to fuse multiple LoRAs into a single one. In practice, however, LoRAs are often delivered incrementally, as users request support for new tasks (e.g., novel problem types or languages). This scenario introduces a new challenge: on-device online continual merging, where the objective is to incorporate new LoRAs while preserving performance on previously supported tasks. In this paper, we propose a data-free and computationally efficient strategy for selecting and merging LoRAs when a new one becomes available, assuming the device can store only a limited number of adapters. Extensive experiments across real-world tasks demonstrate the superiority of our approach over alternative strategies while adhering to the storage budget and compute limitations of on-device settings.
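To make the problem setting concrete, here is a minimal sketch of budget-constrained online continual merging. The abstract does not spell out the paper's selection and merging rule, so this is not the authors' K-Merge algorithm: the similarity-based pair selection and SVD re-factorization below are common data-free heuristics used purely for illustration, and all names (`LoRA`, `merge_pair`, `add_adapter`, `budget`) are hypothetical.

```python
"""Illustrative sketch of on-device online continual LoRA merging under a
storage budget. Not the paper's K-Merge method, whose details are not given
in the abstract."""
import numpy as np


class LoRA:
    """One adapter: per-layer low-rank factors so that delta_W = B @ A."""
    def __init__(self, factors):  # factors: dict[layer_name] -> (A, B)
        self.factors = factors

    def delta(self, layer):
        A, B = self.factors[layer]
        return B @ A


def cosine_similarity(lora_a, lora_b):
    """Data-free similarity: cosine between concatenated weight deltas."""
    va = np.concatenate([lora_a.delta(l).ravel() for l in lora_a.factors])
    vb = np.concatenate([lora_b.delta(l).ravel() for l in lora_b.factors])
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12))


def merge_pair(lora_a, lora_b, rank):
    """Fuse two adapters: average their deltas, truncate back to `rank` via SVD."""
    merged = {}
    for layer in lora_a.factors:
        avg = 0.5 * (lora_a.delta(layer) + lora_b.delta(layer))
        U, S, Vt = np.linalg.svd(avg, full_matrices=False)
        merged[layer] = (np.diag(S[:rank]) @ Vt[:rank], U[:, :rank])  # (A, B)
    return LoRA(merged)


def add_adapter(pool, new_lora, budget, rank):
    """Online step: store the new adapter, or merge it into the most similar
    stored adapter when the on-device budget is already exhausted."""
    if len(pool) < budget:
        pool.append(new_lora)
        return pool
    sims = [cosine_similarity(new_lora, stored) for stored in pool]
    j = int(np.argmax(sims))  # most similar stored adapter
    pool[j] = merge_pair(pool[j], new_lora, rank)
    return pool


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rank, d = 4, 64

    def random_lora():
        return LoRA({"q_proj": (rng.normal(size=(rank, d)),
                                rng.normal(size=(d, rank)))})

    pool = []
    for _ in range(6):  # six task adapters arrive, but only three slots exist
        pool = add_adapter(pool, random_lora(), budget=3, rank=rank)
    print(f"adapters stored: {len(pool)}")  # never exceeds the budget
```

The key design constraint illustrated here is that every decision (which stored adapter to merge with, and how to fuse the pair) uses only the adapter weights themselves, with no task data and no retraining, which is what makes the strategy feasible on-device.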
Similar Papers
Position: Pause Recycling LoRAs and Prioritize Mechanisms to Uncover Limits and Effectiveness
Computation and Language
Reusing AI knowledge parts doesn't always work.
LoRA on the Go: Instance-level Dynamic LoRA Selection and Merging
Computation and Language
Lets AI switch jobs instantly without retraining.
MemLoRA: Distilling Expert Adapters for On-Device Memory Systems
Machine Learning (CS)
Lets small phones remember and see things.