Score: 2

MLoRQ: Bridging Low-Rank and Quantization for Transformer Compression

Published: July 13, 2025 | arXiv ID: 2507.09616v1

By: Ofir Gordon, Ariel Lapid, Elad Cohen, and more

BigTech Affiliations: Sony

Potential Business Impact:

Compresses transformer-based AI models so they can run efficiently on small, memory-limited edge devices.

Business Areas:
Model Compression and Edge AI

Deploying transformer-based neural networks on resource-constrained edge devices presents a significant challenge. This challenge is often addressed through techniques such as low-rank approximation and mixed-precision quantization. In this work, we introduce Mixed Low-Rank and Quantization (MLoRQ), a novel method that integrates both techniques. MLoRQ employs a two-stage optimization process to determine optimal bit-width and rank assignments for each layer, adhering to predefined memory constraints. This process includes: (i) an intra-layer optimization that identifies potentially optimal compression solutions among all low-rank approximation and quantization combinations; (ii) an inter-layer optimization that assigns a bit-width precision and rank to each layer while ensuring the memory constraint is met. An optional final step applies a sequential optimization process using a modified adaptive rounding technique to mitigate compression-induced errors in joint low-rank approximation and quantization. The method is compatible with, and can be seamlessly integrated into, most existing quantization algorithms. MLoRQ achieves state-of-the-art results, with up to a 15% performance improvement, evaluated on Vision Transformers for image classification, object detection, and instance segmentation tasks.
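The two-stage search described in the abstract can be illustrated with a small sketch. The candidate scoring below (truncated SVD plus symmetric uniform quantization, with a Frobenius-norm error proxy), the Pareto pruning, and the greedy budget assignment are illustrative assumptions made for this example; they are not the paper's exact intra-layer and inter-layer procedures, and all function and variable names are hypothetical.

```python
# Illustrative sketch of a two-stage low-rank + quantization search
# (assumed formulation; not the paper's exact algorithm).
import numpy as np
from itertools import product

def quantize(x, bits):
    """Symmetric uniform quantization of a tensor to the given bit-width."""
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1) + 1e-12
    q = np.round(x / scale).clip(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * scale

def layer_candidates(W, ranks, bit_widths):
    """Intra-layer step: score every (rank, bits) pair, keep Pareto-optimal ones."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    cands = []
    for r, b in product(ranks, bit_widths):
        A = U[:, :r] * s[:r]                     # low-rank factor (m x r)
        B = Vt[:r, :]                            # low-rank factor (r x n)
        W_hat = quantize(A, b) @ quantize(B, b)  # joint low-rank + quantization
        err = np.linalg.norm(W - W_hat) ** 2     # compression-error proxy
        mem = (A.size + B.size) * b / 8          # memory footprint in bytes
        cands.append({"rank": r, "bits": b, "err": err, "mem": mem})
    # Discard candidates dominated in both error and memory.
    return sorted(
        [c for c in cands
         if not any(o["err"] <= c["err"] and o["mem"] < c["mem"] for o in cands)],
        key=lambda c: c["mem"])

def assign_under_budget(per_layer_cands, budget_bytes):
    """Inter-layer step: start from the lowest-error candidate per layer,
    then trade error for memory until the total budget is met."""
    choice = [min(cs, key=lambda c: c["err"]) for cs in per_layer_cands]
    while sum(c["mem"] for c in choice) > budget_bytes:
        best = None
        for i, cs in enumerate(per_layer_cands):
            cheaper = [c for c in cs if c["mem"] < choice[i]["mem"]]
            if cheaper:
                alt = min(cheaper, key=lambda c: c["err"])
                delta = alt["err"] - choice[i]["err"]
                if best is None or delta < best[0]:
                    best = (delta, i, alt)
        if best is None:
            raise ValueError("Memory budget infeasible for the given candidates.")
        choice[best[1]] = best[2]
    return choice

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    layers = [rng.standard_normal((64, 64)) for _ in range(4)]
    cands = [layer_candidates(W, ranks=[8, 16, 32, 64], bit_widths=[2, 4, 8])
             for W in layers]
    for i, c in enumerate(assign_under_budget(cands, budget_bytes=20_000)):
        print(f"layer {i}: rank={c['rank']}, bits={c['bits']}, mem={c['mem']:.0f} B")
```

The optional final step from the abstract (a modified adaptive-rounding refinement of the chosen low-rank, quantized factors) would run after this assignment and is omitted here for brevity.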

Country of Origin
🇯🇵 Japan

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)