Rethinking Layer-wise Model Merging through Chain of Merges
By: Pietro Buzzega, Riccardo Salami, Angelo Porrello and more
Potential Business Impact:
Combines many specialized AI models into a single model without retraining.
Fine-tuning pretrained models has become a standard pathway to achieve state-of-the-art performance across a wide range of domains, leading to a proliferation of task-specific model variants. As the number of such specialized modules increases, merging them into a unified model without retraining has become a critical challenge. Existing merging techniques often rely on interference heuristics, importance weighting, or activation matching while treating each layer independently, thereby failing to account for the inter-layer dependencies inherent in deep networks. This simplification leads to distributional mismatches, especially in activation-based methods, when changes in early layers are not properly reflected in downstream ones. We identify these mismatches as a form of internal covariate shift, comparable to the phenomenon encountered in the initial phases of neural network training. To address it, we propose Chain of Merges (CoM), a layer-wise merging procedure that updates activation statistics in an auto-regressive fashion, explicitly accounting for cross-layer interactions. CoM produces a coherent merged model through a series of conditionally optimal updates, effectively mitigating degradation caused by covariate shift. Experiments on standard benchmarks demonstrate that CoM achieves state-of-the-art performance.
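The abstract describes the auto-regressive, layer-wise procedure only at a high level. Below is a minimal sketch of that idea, assuming depth-aligned models built from linear layers and a small calibration batch; the per-layer criterion used here (least-squares activation matching against the mean of the task-specific outputs) is a hypothetical stand-in, not the paper's exact merging rule.

```python
# Sketch of the Chain-of-Merges idea: merge layer by layer, front to back,
# recomputing activation statistics from the ALREADY-MERGED prefix so that
# downstream merges see the true covariates (avoiding internal covariate shift).
# This is an illustrative reconstruction, not the authors' implementation.
import torch


def merge_layer(layers, inputs):
    """Merge parallel task-specific layers by matching outputs on `inputs`.

    `inputs` are activations produced by the merged prefix of the network,
    which is what makes the statistics auto-regressive.
    """
    # Target: average output of the task-specific layers on these inputs.
    target = torch.stack([l(inputs) for l in layers]).mean(dim=0)
    # Fit a single linear layer (weights + bias) to that target by least squares.
    X = torch.cat([inputs, torch.ones(inputs.size(0), 1)], dim=1)
    sol = torch.linalg.lstsq(X, target).solution  # shape: (in+1, out)
    merged = torch.nn.Linear(inputs.size(1), target.size(1))
    with torch.no_grad():
        merged.weight.copy_(sol[:-1].T)
        merged.bias.copy_(sol[-1])
    return merged


def chain_of_merges(models, calib_x):
    """Merge depth-aligned models (lists of nn.Linear) layer by layer."""
    merged_layers, modules = [], []
    acts = calib_x
    num_layers = len(models[0])
    for i in range(num_layers):
        # Merge layer i using activations from the merged prefix, not from
        # the original models, so each update is conditioned on earlier merges.
        layer_i = merge_layer([m[i] for m in models], acts)
        merged_layers.append(layer_i)
        modules.append(layer_i)
        if i < num_layers - 1:
            modules.append(torch.nn.ReLU())
            acts = torch.relu(layer_i(acts))  # propagate through merged prefix
    return torch.nn.Sequential(*modules)
```

The key design choice this sketch illustrates is that calibration activations are always recomputed through the merged layers built so far, rather than taken from each original model independently.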
Similar Papers
Layer as Puzzle Pieces: Compressing Large Language Models through Layer Concatenation
CV and Pattern Recognition
Makes big AI models smaller without losing smarts.
Chain-of-Model Learning for Language Model
Computation and Language
Makes computer models learn faster and run at different sizes.
A Systematic Study of Model Merging Techniques in Large Language Models
Computation and Language
Combines AI models to make them smarter without retraining.