DivMerge: A divergence-based model merging method for multi-tasking
By: Brahim Touayouch, Loïc Fosse, Géraldine Damnati, and more
Potential Business Impact:
Combines many task-specific AI models into one model without losing their skills.
Multi-task learning (MTL) is often achieved by merging datasets before fine-tuning, but the growing availability of fine-tuned models has led to new approaches such as model merging via task arithmetic. A major challenge in this setting is task interference, which worsens as the number of tasks increases. We propose a method that merges models trained on different tasks into a single model, maintaining strong performance across all tasks. Our approach leverages Jensen-Shannon divergence to guide the merging process without requiring additional labelled data, and automatically balances task importance. Unlike existing methods, our approach remains robust as the number of tasks grows and consistently outperforms prior work.
Similar Papers
Multi-Level Collaboration in Model Merging
Machine Learning (CS)
Combines many AI models into one more capable model.
Training-free LLM Merging for Multi-task Learning
Computation and Language
Merges language models to handle more tasks without extra training.
From Task-Specific Models to Unified Systems: A Review of Model Merging Approaches
Machine Learning (CS)
Combines AI models to learn many tasks without original data.