Null-LoRA: Low-Rank Adaptation on Null Space
By: Yi Zhang, Yulei Kang, Haoxuan Chen, and more
Potential Business Impact:
Teaches computers new things with less effort.
Parameter-efficient fine-tuning methods, most notably LoRA and its variants, have gained considerable popularity for adapting large-scale models to downstream tasks. Existing methods perform low-rank adaptation over the full parameter space, yet fine-tuning within a well-chosen subspace can be comparably effective. Motivated by the observation that pre-trained models possess non-trivial null spaces, we propose Null-space based Low-Rank Adaptation (Null-LoRA). Null-LoRA reduces redundancy and raises the effective rank of the update by freezing portions of the low-rank matrices. To further improve parameter efficiency, Null-LoRA constrains the entire incremental update to lie within the null space, so that the update capacity is devoted to adapting to new task paradigms. In extensive experiments on image-text retrieval and visual question answering, Null-LoRA surpasses the state of the art while using fewer parameters.
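The abstract describes constraining the LoRA update to the null space of the pre-trained weights while freezing part of the low-rank factors. Below is a minimal sketch of one way such a constraint could look, assuming a PyTorch-style layer in which the down-projection A is initialized from approximate null-space directions of the pre-trained weight (its smallest right singular vectors) and frozen, and only the up-projection B is trained. The class name `NullSpaceLoRALinear` and this particular construction are illustrative assumptions, not the authors' exact method.

```python
import torch
import torch.nn as nn

class NullSpaceLoRALinear(nn.Module):
    """Illustrative null-space-constrained LoRA layer (hypothetical sketch).

    The frozen pre-trained weight W is augmented with a low-rank update
    delta_W = B @ A, where A is built from approximate null-space directions
    of W (right singular vectors with the smallest singular values) and kept
    frozen, so the trainable update only acts on input directions that W
    (nearly) ignores.
    """

    def __init__(self, weight: torch.Tensor, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        out_features, in_features = weight.shape
        self.weight = nn.Parameter(weight.clone(), requires_grad=False)

        # Approximate null space of W: right singular vectors associated
        # with the smallest singular values.
        _, _, Vh = torch.linalg.svd(weight, full_matrices=True)
        null_basis = Vh[-rank:, :]  # (rank, in_features)

        # Down-projection A: frozen, spans the null-space directions.
        self.lora_A = nn.Parameter(null_basis.clone(), requires_grad=False)
        # Up-projection B: trainable, zero-initialized so fine-tuning starts
        # exactly from the pre-trained model.
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.weight.T
        update = (x @ self.lora_A.T) @ self.lora_B.T * self.scaling
        return base + update


# Usage sketch: wrap a pre-trained projection and fine-tune only lora_B.
pretrained = torch.randn(768, 768)
layer = NullSpaceLoRALinear(pretrained, rank=8)
trainable = [p for p in layer.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))  # 768 * 8 trainable parameters
```

In this sketch only B is trained, so the number of trainable parameters per layer drops roughly in half compared with standard LoRA at the same rank; how Null-LoRA itself selects and freezes the null-space factors is detailed in the paper.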
Similar Papers
LoRA-Null: Low-Rank Adaptation via Null Space for Large Language Models
Computation and Language
Keeps AI smart while teaching it new things.
Less is More: Resource-Efficient Low-Rank Adaptation
Computation and Language
Makes AI learn faster and better with less effort.
QR-LoRA: QR-Based Low-Rank Adaptation for Efficient Fine-Tuning of Large Language Models
Machine Learning (CS)
Makes AI learn new things with fewer computer parts.