Accurate and Efficient Low-Rank Model Merging in Core Space

Published: September 22, 2025 | arXiv ID: 2509.17786v1

By: Aniello Panariello, Daniel Marczak, Simone Magistri, and more

Potential Business Impact:

Merges fine-tuned AI models more efficiently while preserving their accuracy.

Business Areas:
A/B Testing, Data and Analytics

In this paper, we address the challenges associated with merging low-rank adaptations of large neural networks. With the rise of parameter-efficient adaptation techniques, such as Low-Rank Adaptation (LoRA), model fine-tuning has become more accessible. While fine-tuning models with LoRA is highly efficient, existing merging methods often sacrifice this efficiency by merging full-sized weight matrices. We propose the Core Space merging framework, which enables the merging of LoRA-adapted models within a common alignment basis, thereby preserving the efficiency of low-rank adaptation while substantially improving accuracy across tasks. We further provide a formal proof that projection into Core Space ensures no loss of information, along with a complexity analysis quantifying the efficiency gains. Extensive empirical results demonstrate that Core Space significantly improves existing merging techniques and achieves state-of-the-art results on both vision and language tasks while using a fraction of the computational resources. The codebase is available at https://github.com/apanariello4/core-space-merging.
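To make the idea concrete, here is a minimal numpy sketch of merging two LoRA updates in a shared low-rank basis. The exact Core Space construction is defined in the paper; this is only an illustration of the general principle (project each low-rank update into a common basis spanning both adapters, merge there, map back), and the variable names and the simple averaging merge operator are assumptions, not the authors' API.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 16, 12, 2  # toy dimensions; rank r per adapter

# Two task-specific LoRA adapters: delta_W_i = B_i @ A_i
B1, A1 = rng.standard_normal((d_out, r)), rng.standard_normal((r, d_in))
B2, A2 = rng.standard_normal((d_out, r)), rng.standard_normal((r, d_in))

# Shared orthonormal bases spanning both adapters' column and row spaces
U, _ = np.linalg.qr(np.hstack([B1, B2]))  # (d_out, 2r)
V, _ = np.linalg.qr(np.vstack([A1, A2]).T)  # (d_in, 2r)

# Project each low-rank update into the shared "core" coordinates
C1 = U.T @ (B1 @ A1) @ V
C2 = U.T @ (B2 @ A2) @ V

# Merge in core space (plain averaging here) and map back; the result
# stays low-rank (rank <= 2r), so merging never touches full-sized weights
delta_merged = U @ ((C1 + C2) / 2) @ V.T

# The projection is lossless: U and V span each adapter's subspaces
assert np.allclose(U @ C1 @ V.T, B1 @ A1)
```

Because `U` and `V` contain the column and row spaces of both updates, projecting into the shared coordinates loses no information, which mirrors (in this simplified setting) the lossless-projection property the paper proves for Core Space.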

Country of Origin
🇵🇱 Poland

Repos / Data Links
https://github.com/apanariello4/core-space-merging
Page Count
26 pages

Category
Computer Science:
CV and Pattern Recognition