Bridging Training and Merging Through Momentum-Aware Optimization
By: Alireza Moayedikia, Alicia Troncoso
Training large neural networks and merging task-specific models both exploit low-rank structure and require estimating parameter importance, yet the two problems have been pursued in isolation. Current workflows compute curvature information during training, discard it, and then recompute similar information for merging, wasting computation and throwing away valuable trajectory data. We introduce a unified framework that maintains factorized momentum and curvature statistics during training and reuses this information for geometry-aware model composition. The proposed method matches the memory efficiency of state-of-the-art low-rank optimizers while accumulating task saliency scores that enable curvature-aware merging without post-hoc Fisher computation. We establish convergence guarantees for non-convex objectives, with approximation error bounded by the decay of the gradient's singular values. On natural language understanding benchmarks, curvature-aware parameter selection outperforms magnitude-only baselines at every sparsity level, and multi-task merging improves over strong baselines. The framework also exhibits rank-invariant convergence and greater hyperparameter robustness than existing low-rank optimizers. By treating the optimization trajectory as a reusable asset rather than discarding it, our approach eliminates redundant computation while enabling more principled model composition.
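The abstract names the ingredients without specifying the update rule, so the following is a minimal, hypothetical sketch of the training side: an Adafactor-style step that keeps row/column-factorized second-moment statistics as a low-rank curvature proxy and accumulates a diagonal-Fisher-style saliency score along the trajectory. The function name `factored_step`, the state layout, and the exact update are assumptions for illustration, not the paper's method.

```python
import torch

@torch.no_grad()
def factored_step(w, grad, state, lr=1e-3, beta=0.999, eps=1e-8):
    """One hypothetical update on a 2-D weight `w`, mutating `state`.

    Keeps factored second moments (low-rank curvature proxy) and a
    running saliency accumulator reused later for merging.
    """
    if "row" not in state:
        state["row"] = torch.zeros(w.shape[0], device=w.device)  # row means of g^2
        state["col"] = torch.zeros(w.shape[1], device=w.device)  # column means of g^2
        state["saliency"] = torch.zeros_like(w)                  # accumulated g^2 (diagonal-Fisher proxy)

    g2 = grad.pow(2)
    state["row"].mul_(beta).add_(g2.mean(dim=1), alpha=1 - beta)
    state["col"].mul_(beta).add_(g2.mean(dim=0), alpha=1 - beta)
    # Rank-1 reconstruction of the second moment, Adafactor-style:
    # exact when g^2 is itself rank-1.
    v = torch.outer(state["row"], state["col"]) / state["row"].mean().clamp_min(eps)
    # Accumulate saliency from gradients already computed for this step,
    # instead of recomputing Fisher information after training.
    state["saliency"].add_(g2)
    w.sub_(lr * grad / (v.sqrt() + eps))
```

Because the saliency accumulator reuses gradients already computed for the optimizer step, it adds no extra backward passes, which is one sense in which the trajectory becomes a reusable asset.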
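On the merging side, one plausible reading of "curvature-aware parameter selection" combined with saliency-based composition is sketched below: each task's parameters are masked by accumulated saliency at a target sparsity (rather than by magnitude), then combined with a Fisher-style weighted average. Again, `merge` and its arguments are illustrative assumptions, not the paper's published procedure.

```python
import torch

def merge(task_weights, task_saliencies, sparsity=0.9, eps=1e-8):
    """Merge task-specific weight tensors using their accumulated saliencies.

    Assumes 0 < sparsity < 1 so the kth-value threshold is well defined.
    """
    merged_num = torch.zeros_like(task_weights[0])
    merged_den = torch.zeros_like(task_weights[0])
    for w, s in zip(task_weights, task_saliencies):
        k = int((1 - sparsity) * w.numel())                      # parameters to keep
        thresh = s.flatten().kthvalue(w.numel() - k).values      # saliency cutoff
        mask = (s > thresh).float()                              # curvature-aware selection
        merged_num += mask * s * w                               # saliency-weighted numerator
        merged_den += mask * s
    return merged_num / merged_den.clamp_min(eps)
```

Swapping the saliency tensor for `w.abs()` in the mask recovers the magnitude-only selection baseline that the abstract reports being outperformed.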