Score: 1

LoFT: Low-Rank Adaptation That Behaves Like Full Fine-Tuning

Published: May 27, 2025 | arXiv ID: 2505.21289v1

By: Nurbek Tastan, Stefanos Laskaridis, Martin Takac, and more

Potential Business Impact:

Lets large AI models be adapted to new tasks with accuracy closer to full fine-tuning and faster convergence, without adding inference cost.

Business Areas:
Micro Lending Financial Services, Lending and Investments

Large pre-trained models are commonly adapted to downstream tasks using parameter-efficient fine-tuning methods such as Low-Rank Adaptation (LoRA), which injects small trainable low-rank matrices instead of updating all weights. While LoRA dramatically reduces trainable parameters with little overhead, it can still underperform full fine-tuning in accuracy and often converges more slowly. We introduce LoFT, a novel low-rank adaptation method that behaves like full fine-tuning by aligning the optimizer's internal dynamics with those of updating all model weights. LoFT not only learns weight updates in a low-rank subspace (like LoRA) but also properly projects the optimizer's first and second moments (Adam's momentum and variance) into the same subspace, mirroring full-model updates. By aligning the low-rank update itself with the full update, LoFT eliminates the need for tuning extra hyperparameters, e.g., the LoRA scaling factor $\alpha$. Empirically, this approach substantially narrows the performance gap between adapter-based tuning and full fine-tuning and consistently outperforms standard LoRA-style methods, all without increasing inference cost.
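The abstract's key mechanism is keeping Adam's first and second moments as if the full weight matrix were being trained, then projecting the resulting step into the adapter's low-rank subspace so the update mirrors a full-model step. Below is a minimal PyTorch sketch of that idea, assuming a LoRA-style parametrization $\Delta W = BA$; the function name `loft_like_adam_step`, the pseudo-inverse projection, and the way the projected step is split between the two factors are illustrative assumptions, not the authors' actual algorithm.

```python
import torch

def loft_like_adam_step(B, A, grad_dw, state, lr=1e-4, betas=(0.9, 0.999), eps=1e-8):
    """One illustrative optimizer step for a low-rank update dW = B @ A.

    B: (d, r) and A: (r, k) are the adapter factors; grad_dw is the gradient
    of the loss with respect to the implicit full-size update dW (d, k).
    Adam's moments are tracked at full size, as in full fine-tuning, and the
    resulting Adam step is projected onto the subspace spanned by B's columns
    and A's rows before being pushed through the factors.
    """
    b1, b2 = betas
    state["t"] += 1
    t = state["t"]
    m, v = state["m"], state["v"]

    # Full-fine-tuning-style Adam moment updates on the full-size gradient.
    m.mul_(b1).add_(grad_dw, alpha=1 - b1)
    v.mul_(b2).addcmul_(grad_dw, grad_dw, value=1 - b2)
    step_full = (m / (1 - b1 ** t)) / ((v / (1 - b2 ** t)).sqrt() + eps)

    # Project the full Adam step into the low-rank subspace of the adapters.
    pinv_B = torch.linalg.pinv(B)                    # (r, d)
    pinv_A = torch.linalg.pinv(A)                    # (k, r)
    step_lowrank = (B @ pinv_B) @ step_full @ (pinv_A @ A)

    # Apply the projected step through the factors (one simple split,
    # chosen here only for illustration).
    B.sub_(lr * step_lowrank @ pinv_A)
    A.sub_(lr * pinv_B @ step_lowrank)
    return B, A


# Toy usage with small random factors (a zero-initialized B, as in vanilla
# LoRA, would make the projection trivially zero in this simplified sketch).
d, k, r = 64, 32, 4
B, A = 0.01 * torch.randn(d, r), 0.01 * torch.randn(r, k)
state = {"m": torch.zeros(d, k), "v": torch.zeros(d, k), "t": 0}
loft_like_adam_step(B, A, torch.randn(d, k), state)
```

Note that because the step is derived from full-size Adam statistics rather than from separately scaled factor gradients, no extra scaling hyperparameter (such as LoRA's $\alpha$) appears in this sketch, which is the property the abstract highlights.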

Country of Origin
United States, United Arab Emirates

Page Count
24 pages

Category
Computer Science:
Machine Learning (CS)