LoRAQuant: Mixed-Precision Quantization of LoRA to Ultra-Low Bits
By: Amir Reza Mirzaei, Yuqiao Wen, Yanshuai Cao, and more
Potential Business Impact:
Makes AI language models use less memory when serving many custom add-ons (adapters) at once.
Low-Rank Adaptation (LoRA) has become a popular technique for parameter-efficient fine-tuning of large language models (LLMs). In many real-world scenarios, multiple adapters are loaded simultaneously to enable LLM customization for personalized user experiences or to support a diverse range of tasks. Although each adapter is lightweight in isolation, the aggregate cost of many adapters becomes substantial at scale. To address this, we propose LoRAQuant, a mixed-precision post-training quantization method tailored to LoRA. Specifically, LoRAQuant reparameterizes each adapter via singular value decomposition (SVD) to concentrate the most important information into specific rows and columns. This makes it possible to quantize the important components to higher precision, while quantizing the rest to ultra-low bitwidth. We conduct comprehensive experiments with LLaMA 2-7B, LLaMA 2-13B, and Mistral 7B models on mathematical reasoning, coding, and summarization tasks. Results show that LoRAQuant uses significantly fewer bits than other quantization methods, yet achieves comparable or even higher performance.
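To make the core idea concrete, here is a minimal PyTorch sketch (not the authors' implementation). It folds a single adapter's factors A and B into the low-rank update B @ A, reparameterizes the update via SVD so that importance concentrates in the leading rows and columns, then fake-quantizes the top-k components at a higher bitwidth and the remainder at an ultra-low one. The function names (`fake_quantize`, `quantize_lora_adapter`), the per-tensor round-to-nearest scheme, and the example bitwidths (4 and 2) are illustrative assumptions; the paper's actual bit allocation and rounding details may differ.

```python
import torch

def fake_quantize(x: torch.Tensor, bits: int) -> torch.Tensor:
    # Symmetric per-tensor round-to-nearest quantization, simulated in float.
    # (Illustrative stand-in for whatever quantizer LoRAQuant actually uses.)
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    return torch.round(x / scale).clamp(-qmax, qmax) * scale

def quantize_lora_adapter(A: torch.Tensor, B: torch.Tensor, k: int,
                          high_bits: int = 4, low_bits: int = 2):
    # A: (r, d_in) down-projection, B: (d_out, r) up-projection; update = B @ A.
    delta = B @ A
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    r = A.shape[0]
    # Reparameterized factors, with columns/rows ordered by singular value,
    # so the "important" information sits in the first k components.
    B_new = U[:, :r] * S[:r]   # columns scaled by singular values
    A_new = Vh[:r, :]
    # Mixed precision: top-k components keep more bits, the rest go ultra-low.
    B_q = torch.cat([fake_quantize(B_new[:, :k], high_bits),
                     fake_quantize(B_new[:, k:], low_bits)], dim=1)
    A_q = torch.cat([fake_quantize(A_new[:k, :], high_bits),
                     fake_quantize(A_new[k:, :], low_bits)], dim=0)
    return A_q, B_q

# Example: a rank-16 adapter for a 4096x4096 projection, keeping 4 components
# at higher precision. The relative error measures how well the quantized
# factors reproduce the original low-rank update.
A = torch.randn(16, 4096) * 0.01
B = torch.randn(4096, 16) * 0.01
A_q, B_q = quantize_lora_adapter(A, B, k=4)
rel_err = (B @ A - B_q @ A_q).norm() / (B @ A).norm()
```

Because the SVD sorts components by singular value, spending the extra bits on the first k rows and columns is a natural way to protect most of the adapter's energy while the long tail of small components tolerates very coarse quantization.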
Similar Papers
QR-LoRA: QR-Based Low-Rank Adaptation for Efficient Fine-Tuning of Large Language Models
Machine Learning (CS)
Makes AI learn new things with fewer computer parts.
Efficient Fine-Tuning of Quantized Models via Adaptive Rank and Bitwidth
Machine Learning (CS)
Makes big computer brains learn better with less memory.
ScaLoRA: Optimally Scaled Low-Rank Adaptation for Efficient High-Rank Fine-Tuning
Machine Learning (CS)
Makes smart computer programs learn faster and better.