FRoD: Full-Rank Efficient Fine-Tuning with Rotational Degrees for Fast Convergence
By: Guoan Wan, Tianyu Chen, Fangzheng Feng, and more
Potential Business Impact:
Makes AI learn tasks faster with less effort.
Parameter-efficient fine-tuning (PEFT) methods have emerged as a practical solution for adapting large foundation models to downstream tasks, reducing computational and memory costs by updating only a small subset of parameters. Among them, approaches like LoRA aim to strike a balance between efficiency and expressiveness, but they often suffer from slow convergence and limited adaptation capacity due to their inherent low-rank constraints. This trade-off hampers the ability of PEFT methods to capture the complex patterns needed for diverse tasks. To address these challenges, we propose FRoD, a novel fine-tuning method that combines hierarchical joint decomposition with rotational degrees of freedom. By extracting a globally shared basis across layers and injecting sparse, learnable perturbations into the scaling factors, FRoD enables flexible full-rank updates that enhance expressiveness and efficiency, leading to faster and more robust convergence. On 20 benchmarks spanning vision, reasoning, and language understanding, FRoD matches full fine-tuning in accuracy while updating only 1.72% of the parameters under identical training budgets.
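The abstract only sketches the mechanism, so here is a minimal PyTorch illustration of one way the described update could look: a frozen linear layer plus an update expressed in a globally shared basis, with a small vector of learnable scaling perturbations as the only trainable tensor. This is not the authors' implementation; the class name FRoDLinearSketch, the shapes of U and V, and the use of a plain perturbation vector (with sparsity left to a mask or regularizer) are all assumptions inferred from the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FRoDLinearSketch(nn.Module):
    """Hypothetical sketch of an FRoD-style adapter around a frozen linear layer.
    U and V form a shared basis (e.g. from a joint decomposition of pretrained
    weights) reused across layers of the same shape; only `delta` is trained."""

    def __init__(self, base: nn.Linear, U: torch.Tensor, V: torch.Tensor):
        super().__init__()
        # Frozen pretrained weight and bias.
        self.register_buffer("W0", base.weight.detach())
        bias = base.bias.detach() if base.bias is not None else None
        self.register_buffer("bias", bias)
        # Globally shared, frozen basis (assumption: one basis per weight shape).
        self.register_buffer("U", U)  # (out_features, r)
        self.register_buffer("V", V)  # (in_features, r)
        # Learnable perturbations of the per-direction scaling factors.
        # Sparsity could be imposed with a mask or regularizer (not shown).
        self.delta = nn.Parameter(torch.zeros(U.shape[1]))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Update in the shared basis: U diag(delta) V^T.
        # With r = min(out_features, in_features), the update can be full rank.
        update = (self.U * self.delta) @ self.V.T
        return F.linear(x, self.W0 + update, self.bias)
```

In such a sketch, U and V might be obtained once (for instance from an SVD of stacked pretrained weights) and shared across all layers with the same shape, so the per-layer trainable state reduces to the small perturbation vector, which is consistent with the very low trainable-parameter fraction the abstract reports.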
Similar Papers
Transformed Low-rank Adaptation via Tensor Decomposition and Its Applications to Text-to-image Models
Machine Learning (CS)
Makes AI art models learn new styles faster.
DeLoRA: Decoupling Angles and Strength in Low-rank Adaptation
Machine Learning (CS)
Makes AI learn new things better and faster.
How Much is Too Much? Exploring LoRA Rank Trade-offs for Retaining Knowledge and Domain Robustness
Computation and Language
Makes AI smarter for questions, even new ones.