Faster Than SVD, Smarter Than SGD: The OPLoRA Alternating Update

Published: September 24, 2025 | arXiv ID: 2509.19977v1

By: Abdulla Jasem Almansoori, Maria Ivanova, Andrey Veprikov, and more

Potential Business Impact:

Makes fine-tuning large AI models work better while using less memory and computing power.

Business Areas:
A/B Testing; Data and Analytics

Low-Rank Adaptation (LoRA) fine-tunes large models by learning low-rank updates on top of frozen weights, dramatically reducing trainable parameters and memory. However, there is still a gap between full training with low-rank projections (SVDLoRA) and LoRA fine-tuning, indicating that LoRA steps can be further improved. In this study, we propose OPLoRA, a memory-efficient optimizer that closes this gap by casting LoRA optimization as an interpretable sub-problem and solving it efficiently with alternating least squares updates, where 1-2 alternating steps are empirically found to be sufficient to closely match truncated SVD without ever forming the full matrix. We also recover recently proposed preconditioning methods for LoRA as a special case. OPLoRA supports momentum by maintaining a low-rank estimate using the same subroutine (LoRSum) for computing the step, with a memory budget of 3 times the number of LoRA parameters (i.e., the same as Adam). We also propose an experimental scaled variant that uses the K-FAC metric, which may be of independent interest. Across a linear task, MNIST, CIFAR-100, and RoBERTa-base (MNLI), OPLoRA consistently approaches SVDLoRA's performance while using significantly less memory.
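
The alternating least squares idea in the abstract can be illustrated with a short sketch: approximate a rank-r factorization of a target matrix that is only accessible through matrix-vector products, so the full matrix is never formed. The snippet below is a generic ALS routine under my own assumptions (NumPy, hypothetical names such as als_lowrank, matvec, rmatvec); it is not the authors' LoRSum/OPLoRA implementation, only an illustration of the underlying mechanism.

```python
import numpy as np

def als_lowrank(matvec, rmatvec, d, k, r, num_steps=2, seed=0):
    """Rank-r approximation M ~= B @ A via alternating least squares.

    The (d x k) target M is touched only through matvec(X) = M @ X and
    rmatvec(X) = M.T @ X, so it is never materialized. num_steps=2 mirrors
    the abstract's observation that 1-2 alternations already track a
    truncated SVD closely.
    """
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((r, k))  # right factor (r x k)
    B = np.zeros((d, r))             # left factor (d x r), set on first pass
    for _ in range(num_steps):
        # Fix A, minimize ||B @ A - M||_F over B:  B = M A^T (A A^T)^+
        B = matvec(A.T) @ np.linalg.pinv(A @ A.T)
        # Fix B, minimize ||B @ A - M||_F over A:  A = (B^T B)^+ B^T M
        A = np.linalg.pinv(B.T @ B) @ rmatvec(B).T
    return B, A

# Usage: the target M = U @ V.T is given implicitly as a product of thin factors.
d, k, r = 512, 256, 8
rng = np.random.default_rng(1)
U, V = rng.standard_normal((d, 16)), rng.standard_normal((k, 16))
matvec = lambda X: U @ (V.T @ X)    # computes M @ X without forming M
rmatvec = lambda X: V @ (U.T @ X)   # computes M.T @ X without forming M
B, A = als_lowrank(matvec, rmatvec, d, k, r)
```

Because every alternation only needs a few skinny matrix products and an r x r pseudoinverse, one or two such steps are far cheaper than an explicit truncated SVD of the full d x k matrix, which is the efficiency argument the abstract makes.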

Country of Origin
🇦🇪 United Arab Emirates

Repos / Data Links

Page Count
12 pages

Category
Computer Science: Machine Learning (CS)