PreLoRA: Hybrid Pre-training of Vision Transformers with Full Training and Low-Rank Adapters
By: Krishu K Thapa, Reet Barik, Krishna Teja Chitty-Venkata, and more
Potential Business Impact:
Trains big computer brains faster, using less power.
Training large models ranging from millions to billions of parameters is highly resource-intensive, requiring significant time, compute, and memory. We observe that most of the learning (the largest changes in weights) takes place in the earlier stages of the training loop. These changes stabilize as training continues, so they can be captured by matrices of low intrinsic rank. We therefore propose an approach that identifies such states of partial convergence and dynamically switches from full-parameter training to Low-Rank Adaptation (LoRA) on the ViT-Large model. The approach is flexible: user-defined hyperparameters determine the switching point, and each module layer is assigned a rank based on its level of convergence. Experimental results show that this approach preserves model accuracy while reducing the number of trainable parameters to 10% of the original count, yielding a 3x improvement in throughput, a 1.5x reduction in average training time per epoch, and a 20% reduction in GPU memory consumption.
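The sketch below illustrates the general idea of such a convergence-triggered switch. It is not the authors' implementation: it assumes PyTorch, a timm ViT-Large (`vit_large_patch16_224`), and Hugging Face `peft`'s `LoraConfig`/`get_peft_model` (including its per-layer `rank_pattern` option); the weight-change threshold, the delta-to-rank mapping, and the synthetic data are illustrative placeholders for the user-defined hyperparameters described in the abstract.

```python
# Minimal sketch (not the paper's code): train a ViT with full parameters,
# monitor per-module weight change each epoch, and once all changes fall below
# a user-defined threshold, switch to LoRA with per-module ranks derived from
# how much each module was still changing at the switch point.
import torch
import timm
from torch.utils.data import DataLoader, TensorDataset
from peft import LoraConfig, get_peft_model


def relative_deltas(model, snapshot):
    """Relative Frobenius-norm change of each trainable weight since the last snapshot."""
    deltas = {}
    for name, p in model.named_parameters():
        if name in snapshot and p.requires_grad:
            prev = snapshot[name]
            deltas[name] = ((p.detach() - prev).norm() / (prev.norm() + 1e-12)).item()
    return deltas


def rank_from_delta(delta, threshold, r_min=4, r_max=64):
    """Less-converged modules (larger relative change) get a higher LoRA rank."""
    return int(max(r_min, min(r_max, round(r_max * delta / threshold))))


def train_one_epoch(model, optimizer, loader):
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()


# Tiny synthetic dataset so the sketch runs end to end; replace with a real loader.
data = TensorDataset(torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,)))
loader = DataLoader(data, batch_size=4)

model = timm.create_model("vit_large_patch16_224", pretrained=False, num_classes=10)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
snapshot = {n: p.detach().clone() for n, p in model.named_parameters()}

switch_threshold = 0.05   # illustrative hyperparameter controlling the switch point
num_epochs = 10
switched = False

for epoch in range(num_epochs):
    train_one_epoch(model, optimizer, loader)
    if switched:
        continue  # remaining epochs train only the LoRA adapters

    deltas = relative_deltas(model, snapshot)
    snapshot = {n: p.detach().clone() for n, p in model.named_parameters()}

    if max(deltas.values()) < switch_threshold:
        # Per-module ranks for the attention projections, scaled by their deltas.
        rank_pattern = {
            name.rsplit(".weight", 1)[0]: rank_from_delta(d, switch_threshold)
            for name, d in deltas.items()
            if name.endswith(".weight") and (".attn.qkv" in name or ".attn.proj" in name)
        }
        config = LoraConfig(r=8, lora_alpha=16,
                            target_modules=["qkv", "proj"],
                            rank_pattern=rank_pattern)
        model = get_peft_model(model, config)   # freezes base weights, adds adapters
        optimizer = torch.optim.AdamW(
            (p for p in model.parameters() if p.requires_grad), lr=3e-4)
        switched = True
```

After the switch, only the adapter parameters remain trainable, which is what drives the reported reductions in trainable parameters, memory, and per-epoch time; how the threshold and rank assignment are actually defined is specific to the paper.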
Similar Papers
ScaLoRA: Optimally Scaled Low-Rank Adaptation for Efficient High-Rank Fine-Tuning
Machine Learning (CS)
Makes smart computer programs learn faster and better.
ElaLoRA: Elastic & Learnable Low-Rank Adaptation for Efficient Model Fine-Tuning
Machine Learning (CS)
Makes AI learn faster with less effort.
WeightLoRA: Keep Only Necessary Adapters
Machine Learning (CS)
Trains big computer brains with less memory.