Score: 3

Cross-LoRA: A Data-Free LoRA Transfer Framework across Heterogeneous LLMs

Published: August 7, 2025 | arXiv ID: 2508.05232v1

By: Feifan Xia, Mingyang Liao, Yuyang Fang, and more

BigTech Affiliations: Baidu

Potential Business Impact:

Lets a skill learned by one AI model (a LoRA adapter) be reused on a different model, without retraining or extra data.

Traditional parameter-efficient fine-tuning (PEFT) methods such as LoRA are tightly coupled with the base model architecture, which constrains their applicability across heterogeneous pretrained large language models (LLMs). To address this limitation, we introduce Cross-LoRA, a data-free framework for transferring LoRA modules between diverse base models without requiring additional training data. Cross-LoRA consists of two key components: (a) LoRA-Align, which performs subspace alignment between source and target base models through rank-truncated singular value decomposition (SVD) and a Frobenius-optimal linear transformation, ensuring compatibility under dimension mismatch; and (b) LoRA-Shift, which applies the aligned subspaces to project source LoRA weight updates into the target model parameter space. Both components are data-free and training-free, and enable lightweight adaptation on a commodity GPU in 20 minutes. Experiments on ARC, OBQA, and HellaSwag show that Cross-LoRA achieves relative gains of up to 5.26% over base models. Across other commonsense reasoning benchmarks, Cross-LoRA maintains performance comparable to that of directly trained LoRA adapters.
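The abstract's two-step pipeline (subspace alignment, then projection of the LoRA update) can be sketched in a few lines of NumPy. The paper's exact objective and per-layer details are not given in this summary, so the following is a minimal illustration under stated assumptions: the alignment map is taken as the least-squares (Frobenius-optimal) transform between rank-truncated left singular subspaces, and the same transform is reused for input and output dimensions; all matrix shapes and names here are hypothetical.

```python
import numpy as np

def align_subspaces(W_src, W_tgt, rank):
    """LoRA-Align sketch: rank-truncated SVD of both base weight
    matrices, then a Frobenius-optimal linear map between the
    resulting subspaces. Handles dimension mismatch between the
    source and target hidden sizes."""
    # Keep only the top-`rank` left singular vectors of each base model.
    U_s, _, _ = np.linalg.svd(W_src, full_matrices=False)
    U_t, _, _ = np.linalg.svd(W_tgt, full_matrices=False)
    U_s, U_t = U_s[:, :rank], U_t[:, :rank]
    # T minimizes ||T @ U_s - U_t||_F, giving T = U_t @ pinv(U_s).
    return U_t @ np.linalg.pinv(U_s)

def shift_lora(delta_W_src, T_out, T_in):
    """LoRA-Shift sketch: project the source LoRA update
    (delta_W = B @ A) into the target parameter space."""
    return T_out @ delta_W_src @ T_in.T

# Toy example with mismatched hidden sizes (source 64, target 96).
rng = np.random.default_rng(0)
W_src = rng.normal(size=(64, 64))      # a source base-model weight matrix
W_tgt = rng.normal(size=(96, 96))      # the corresponding target weight
delta_W = 0.01 * rng.normal(size=(64, 64))  # source LoRA update B @ A
T = align_subspaces(W_src, W_tgt, rank=8)
delta_W_tgt = shift_lora(delta_W, T, T)
print(delta_W_tgt.shape)  # (96, 96): update now fits the target model
```

Both steps are closed-form linear algebra on the weights alone, which is consistent with the abstract's claim that the method is data-free, training-free, and cheap enough to run on a commodity GPU.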

Country of Origin
πŸ‡ΊπŸ‡Έ πŸ‡¬πŸ‡§ πŸ‡¨πŸ‡³ United States, United Kingdom, China

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)