UORA: Uniform Orthogonal Reinitialization Adaptation in Parameter-Efficient Fine-Tuning of Large Models
By: Xueyan Zhang, Jinman Zhao, Zhifei Yang, and more
Potential Business Impact:
Makes big computer brains learn new things faster.
This paper introduces Uniform Orthogonal Reinitialization Adaptation (UORA), a novel parameter-efficient fine-tuning (PEFT) approach for Large Language Models (LLMs). UORA achieves state-of-the-art performance and parameter efficiency by leveraging a low-rank approximation method to reduce the number of trainable parameters. Unlike existing methods such as LoRA and VeRA, UORA employs an interpolation-based reparametrization mechanism that selectively reinitializes rows and columns of frozen projection matrices, guided by a vector-magnitude heuristic. This yields substantially fewer trainable parameters than LoRA and outperforms VeRA in computational and storage efficiency. Comprehensive experiments across various benchmarks demonstrate that UORA achieves competitive fine-tuning performance with negligible computational overhead. We demonstrate its performance on the GLUE and E2E benchmarks and its effectiveness in instruction-tuning large language models and in fine-tuning image classification models. Our contributions establish a new paradigm for scalable and resource-efficient fine-tuning of LLMs.
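The mechanism described in the abstract can be pictured with a small PyTorch sketch: a VeRA-style adapter keeps frozen random projection matrices and trains only two scaling vectors, while a magnitude-based rule redraws the rows of the frozen projection whose trained scales stay small. The class name `UORALikeLinear`, the shapes, the `fraction` parameter, and the exact reinitialization rule are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UORALikeLinear(nn.Module):
    """VeRA-style adapter with magnitude-guided reinitialization of the
    frozen projections. Names, shapes, and the reinit rule are illustrative
    assumptions, not the paper's reference implementation."""

    def __init__(self, base: nn.Linear, rank: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():           # pretrained weight stays frozen
            p.requires_grad = False
        d_out, d_in = base.out_features, base.in_features
        # Frozen random low-rank projections (shared across layers in VeRA).
        self.register_buffer("A", torch.randn(rank, d_in) / d_in ** 0.5)
        self.register_buffer("B", torch.randn(d_out, rank) / rank ** 0.5)
        # Trainable scaling vectors -- the only adapted parameters.
        self.d = nn.Parameter(torch.ones(rank))    # scales the rank dimension
        self.b = nn.Parameter(torch.zeros(d_out))  # scales the output dimension

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W0 x + b * (B (d * (A x)))
        delta = F.linear(x, self.A) * self.d       # (..., rank)
        delta = F.linear(delta, self.B) * self.b   # (..., d_out)
        return self.base(x) + delta

    @torch.no_grad()
    def reinit_low_magnitude(self, fraction: float = 0.1):
        """Hypothetical magnitude heuristic: rows of A whose trainable scale
        |d_i| is smallest are assumed to contribute little, so they are
        redrawn and their scales reset."""
        k = max(1, int(fraction * self.d.numel()))
        idx = torch.argsort(self.d.abs())[:k]
        self.A[idx] = torch.randn(k, self.A.shape[1], device=self.A.device) / self.A.shape[1] ** 0.5
        self.d[idx] = 1.0
```

In practice one would call `reinit_low_magnitude()` periodically during fine-tuning (for example, every few hundred optimizer steps); the interval, like the rest of the sketch, is an assumption rather than a detail taken from the paper.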
Similar Papers
OSoRA: Output-Dimension and Singular-Value Initialized Low-Rank Adaptation
Computation and Language
Makes smart computer programs learn faster with less power.
Robust Federated Finetuning of LLMs via Alternating Optimization of LoRA
Machine Learning (CS)
Makes AI learn faster and better with less data.
RoRA: Efficient Fine-Tuning of LLM with Reliability Optimization for Rank Adaptation
Machine Learning (CS)
Makes AI smarter, even when parts are removed.