Stay Unique, Stay Efficient: Preserving Model Personality in Multi-Task Merging
By: Kuangpu Guo, Yuhe Ding, Jian Liang, and more
Potential Business Impact:
Combines AI models' skills without losing what each one learned.
Model merging has emerged as a promising paradigm for enabling multi-task capabilities without additional training. However, existing methods often experience substantial performance degradation compared with individually fine-tuned models, even on similar tasks, underscoring the need to preserve task-specific information. This paper proposes Decomposition, Thresholding, and Scaling (DTS), an approximation-based personalized merging framework that preserves task-specific information with minimal storage overhead. DTS first applies singular value decomposition to the task-specific information and retains only a small subset of singular values and vectors. It then introduces a novel thresholding strategy that partitions singular vector elements into groups and assigns a scaling factor to each group. To enable generalization to unseen tasks, we further extend DTS with a variant that fuses task-specific information in a data-free manner based on the semantic similarity of task characteristics. Extensive experiments demonstrate that DTS consistently outperforms state-of-the-art baselines while requiring only 1% additional storage per task. Furthermore, experiments on unseen tasks show that the DTS variant achieves significantly better generalization performance. Our code is available at https://github.com/krumpguo/DTS.
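To make the abstract's pipeline concrete, below is a minimal sketch of the decomposition, thresholding, and scaling steps for a single weight matrix. The function names, the rank, the quantile-based threshold, and the two per-group scaling factors are illustrative assumptions for this sketch, not the paper's actual design; see the linked repository for the authors' implementation.

```python
import torch

def dts_compress(task_delta: torch.Tensor, rank: int = 16,
                 threshold: float = 0.5, scale_low: float = 0.5,
                 scale_high: float = 1.5) -> dict:
    """Sketch of a Decomposition-Thresholding-Scaling style compression.

    task_delta: the difference between one fine-tuned weight matrix and the
    shared base weights (the "task-specific information"). The hyperparameters
    here are placeholders, not the paper's values.
    """
    # Decomposition: truncated SVD keeps only the top-`rank` singular triplets.
    U, S, Vh = torch.linalg.svd(task_delta, full_matrices=False)
    U, S, Vh = U[:, :rank], S[:rank], Vh[:rank, :]

    # Thresholding: partition singular-vector elements into groups; here,
    # two groups per factor, split at a magnitude quantile.
    def group_and_scale(mat: torch.Tensor) -> torch.Tensor:
        cutoff = mat.abs().quantile(threshold)
        # Scaling: each group receives its own scaling factor.
        return torch.where(mat.abs() >= cutoff, mat * scale_high, mat * scale_low)

    # Only the compact factors are stored, which is a small fraction of the
    # original parameter count when `rank` is small.
    return {"U": group_and_scale(U), "S": S, "Vh": group_and_scale(Vh)}

def dts_reconstruct(factors: dict) -> torch.Tensor:
    """Rebuild the approximate task-specific update at merge time."""
    return factors["U"] @ torch.diag(factors["S"]) @ factors["Vh"]

if __name__ == "__main__":
    torch.manual_seed(0)
    base = torch.randn(256, 256)                     # shared base weights
    finetuned = base + 0.01 * torch.randn(256, 256)  # one task's fine-tune
    factors = dts_compress(finetuned - base, rank=8)
    personalized = base + dts_reconstruct(factors)
    print(personalized.shape)  # torch.Size([256, 256])
```

In this reading, the low-rank factors play the role of the roughly 1%-per-task storage overhead, and adding the reconstructed update back onto the merged base personalizes it for that task.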
Similar Papers
Efficient Multi-Source Knowledge Transfer by Model Merging
Machine Learning (CS)
Learns faster by combining knowledge from many AI models.
Tensorized Multi-Task Learning for Personalized Modeling of Heterogeneous Individuals with High-Dimensional Data
Machine Learning (CS)
Helps computers learn about different groups of people.
DivMerge: A divergence-based model merging method for multi-tasking
Machine Learning (CS)
Combines many smart computer skills into one.