MLoRQ: Bridging Low-Rank and Quantization for Transformer Compression
By: Ofir Gordon, Ariel Lapid, Elad Cohen, and more
Potential Business Impact:
Makes smart computer programs run faster on small devices.
Deploying transformer-based neural networks on resource-constrained edge devices presents a significant challenge. This challenge is often addressed through techniques such as low-rank approximation and mixed-precision quantization. In this work, we introduce Mixed Low-Rank and Quantization (MLoRQ), a novel method that integrates both techniques. MLoRQ employs a two-stage optimization process to determine optimal bit-width and rank assignments for each layer while adhering to predefined memory constraints. This process includes: (i) an intra-layer optimization that identifies potentially optimal compression solutions among all low-rank and quantization combinations; (ii) an inter-layer optimization that assigns a bit-width precision and rank to each layer while ensuring the memory constraint is met. An optional final step applies a sequential optimization process using a modified adaptive rounding technique to mitigate compression-induced errors from joint low-rank approximation and quantization. The method is compatible with, and can be seamlessly integrated into, most existing quantization algorithms. MLoRQ achieves state-of-the-art results, with up to a 15% performance improvement, evaluated on Vision Transformers for image classification, object detection, and instance segmentation tasks.
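To make the two-stage search concrete, here is a minimal, hypothetical Python sketch of the idea described in the abstract; it is not the authors' implementation. Stage (i) enumerates (rank, bit-width) candidates per layer and keeps only those that are Pareto-optimal in (memory, error); stage (ii) picks one candidate per layer under a total memory budget. The cost and error proxies, the greedy budget allocation, and all function names (`intra_layer`, `inter_layer`, etc.) are illustrative assumptions.

```python
# Illustrative sketch of a mixed low-rank + quantization search (not the authors' code).
import itertools
import numpy as np

def candidate_cost(weight, rank, bits):
    """Memory (in bits) of a rank-`rank` factorization of `weight` quantized to `bits`."""
    out_dim, in_dim = weight.shape
    return (out_dim + in_dim) * rank * bits

def candidate_error(weight, rank, bits):
    """Rough proxy for compression error: truncated-SVD residual plus a quantization penalty."""
    s = np.linalg.svd(weight, compute_uv=False)
    svd_err = np.sqrt(np.sum(s[rank:] ** 2))
    quant_err = np.linalg.norm(weight) / (2 ** bits)  # crude uniform-quantization proxy
    return svd_err + quant_err

def intra_layer(weight, ranks, bit_widths):
    """Stage (i): keep only Pareto-optimal (rank, bits, memory, error) candidates."""
    cands = [(r, b, candidate_cost(weight, r, b), candidate_error(weight, r, b))
             for r, b in itertools.product(ranks, bit_widths)]
    pareto = [c for c in cands
              if not any(o[2] <= c[2] and o[3] < c[3] for o in cands)]
    return sorted(pareto, key=lambda c: c[2])  # ascending memory, non-increasing error

def inter_layer(per_layer_cands, memory_budget):
    """Stage (ii): choose one candidate per layer under the total memory budget.
    Greedy heuristic: start from the cheapest option everywhere, then spend the
    remaining budget on the upgrade with the best error reduction per extra bit."""
    choice = [0] * len(per_layer_cands)
    used = sum(c[i][2] for c, i in zip(per_layer_cands, choice))
    while True:
        best = None
        for li, cands in enumerate(per_layer_cands):
            i = choice[li]
            if i + 1 < len(cands):
                extra = cands[i + 1][2] - cands[i][2]
                gain = cands[i][3] - cands[i + 1][3]
                if used + extra <= memory_budget and gain > 0:
                    score = gain / max(extra, 1)
                    if best is None or score > best[0]:
                        best = (score, li, extra)
        if best is None:
            break
        _, li, extra = best
        choice[li] += 1
        used += extra
    return [cands[i][:2] for cands, i in zip(per_layer_cands, choice)]
```

In this sketch, the per-layer Pareto filtering plays the role of the intra-layer optimization and the greedy budget allocation stands in for the inter-layer assignment; the paper's optional adaptive-rounding refinement is not shown.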
Similar Papers
Efficient Fine-Tuning of Quantized Models via Adaptive Rank and Bitwidth
Machine Learning (CS)
Makes big computer brains learn better with less memory.
LoRAQuant: Mixed-Precision Quantization of LoRA to Ultra-Low Bits
Machine Learning (CS)
Makes smart computer programs smaller and faster.
MiLo: Efficient Quantized MoE Inference with Mixture of Low-Rank Compensators
Machine Learning (CS)
Makes big AI models faster and more accurate.