Less is More: Resource-Efficient Low-Rank Adaptation

Published: November 30, 2025 | arXiv ID: 2512.00878v1

By: Chunlin Tian, Xuyang Wei, Huanrong Liu, and more

Potential Business Impact:

Reduces the compute and memory needed to fine-tune large AI models while maintaining or improving accuracy.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Low-Rank Adaptation (LoRA) is a widely adopted parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), but it still incurs notable overhead and suffers from parameter interference on complex datasets. While recent works decouple LoRA update matrices to exploit matrix-wise asymmetry, training costs remain high. We revisit LoRA from the perspective of inter-matrix and intra-layer parameter redundancy and propose Resource-Efficient Low-Rank Adaptation, EffiLoRA, a lightweight and generalizable approach for language, multimodal, and diffusion models. EffiLoRA employs a unified A matrix across all transformer layers and introduces a runtime selective update of the B matrices to dynamically trade off the system resource budget against model performance. EffiLoRA consistently outperforms LoRA across diverse modalities, including commonsense reasoning, visual instruction tuning, and image generation, demonstrating improved efficiency and robustness.
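To make the two ideas in the abstract concrete, here is a minimal PyTorch sketch of a LoRA linear layer whose A matrix is shared across all layers, plus a runtime rule that only lets a budgeted subset of the per-layer B matrices receive gradient updates. The class and function names (SharedLoRALinear, select_b_updates) and the gradient-norm selection heuristic are illustrative assumptions, not the authors' implementation; the abstract does not specify the selection rule.

```python
# Sketch of the EffiLoRA idea under a standard LoRA parameterization
# W' = W + B @ A (the usual alpha/rank scaling is omitted for brevity).
import torch
import torch.nn as nn


class SharedLoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank update B @ shared_A.

    One A matrix is shared by every layer (exploiting inter-matrix
    redundancy); each layer keeps its own B matrix. This assumes all
    adapted layers have the same input width, so one A fits them all.
    """

    def __init__(self, base: nn.Linear, shared_A: nn.Parameter):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        self.shared_A = shared_A  # shape (rank, in_features), shared
        rank = shared_A.shape[0]
        # B starts at zero so the adapted layer initially equals the base.
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x @ A^T -> (..., rank); then @ B^T -> (..., out_features)
        return self.base(x) + x @ self.shared_A.T @ self.B.T


def select_b_updates(layers: list[SharedLoRALinear], budget: int) -> None:
    """Enable gradients only for the `budget` B matrices with the largest
    gradient norms from the previous step -- a hypothetical stand-in for
    the paper's runtime selective B-matrix update."""
    scores = [
        (l.B.grad.norm().item() if l.B.grad is not None else float("inf"), i)
        for i, l in enumerate(layers)
    ]
    keep = {i for _, i in sorted(scores, reverse=True)[:budget]}
    for i, l in enumerate(layers):
        l.B.requires_grad_(i in keep)


# Usage: build several layers around one shared A, then cap how many
# B matrices train each step to fit a resource budget.
rank, d_in, d_out = 8, 64, 64
shared_A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
layers = [SharedLoRALinear(nn.Linear(d_in, d_out), shared_A) for _ in range(4)]
select_b_updates(layers, budget=2)  # only 2 of 4 B matrices will update
```

Shrinking the budget trades accuracy for lower optimizer state and gradient cost, which is the resource/performance trade-off the abstract describes.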

Page Count
18 pages

Category
Computer Science:
Computation and Language