Memory-Efficient Acceleration of Block Low-Rank Foundation Models on Resource Constrained GPUs
By: Pierre Abillama, Changwoo Lee, Juechu Dong, et al.
Recent advances in transformer-based foundation models have made them the default choice for many tasks, but their rapidly growing size makes fitting a full model on a single GPU increasingly difficult and their computational cost prohibitive. Block low-rank (BLR) compression techniques address this challenge by learning compact representations of weight matrices. While traditional low-rank (LR) methods often incur sharp accuracy drops, BLR approaches such as Monarch and BLAST better capture the underlying structure, preserving accuracy while reducing computation and memory footprint. In this work, we use roofline analysis to show that, although BLR methods achieve theoretical savings and practical speedups for single-token inference, multi-token inference often becomes memory-bound in practice, increasing latency despite compiler-level optimizations in PyTorch. To address this, we introduce custom Triton kernels with partial fusion and memory-layout optimizations for both Monarch and BLAST. On memory-constrained NVIDIA GPUs such as the Jetson Orin Nano and A40, our kernels deliver up to $3.76\times$ speedups and $3\times$ model-size compression over PyTorch dense baselines that use the CUDA backend and compiler-level optimizations, while supporting a range of models including Llama-7/1B, GPT2-S, DiT-XL/2, and ViT-B. Our code is available at https://github.com/pabillam/mem-efficient-blr.
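As a rough illustration of the kind of structured matmul these BLR methods build on, the sketch below shows a Monarch-style block low-rank multiply in plain PyTorch: two block-diagonal factors interleaved with a reshape/transpose permutation. The function name, the square two-factor layout, and all shapes are illustrative assumptions; this is not the authors' fused Triton kernel or the repository's API.

```python
import torch

def monarch_like_matmul(x, b1, b2):
    """Hypothetical Monarch-style multiply y ~= W x for a square n x n weight,
    with n = m * m, represented by two block-diagonal factors b1, b2 of shape
    (m, m, m): m blocks, each m x m. Plain-PyTorch sketch for illustration only."""
    batch = x.shape[0]
    m = b1.shape[0]
    # First block-diagonal factor: block k multiplies the k-th chunk of x.
    y = torch.einsum('kij,bkj->bki', b1, x.reshape(batch, m, m))
    # Permutation between the two factors: swap block and intra-block axes.
    y = y.transpose(1, 2).contiguous()
    # Second block-diagonal factor.
    z = torch.einsum('kij,bkj->bki', b2, y)
    # Undo the permutation and flatten back to (batch, n).
    return z.transpose(1, 2).reshape(batch, m * m)

# Example: a 4096 x 4096 dense weight (~16.8M params) is replaced by two
# block-diagonal factors totalling 2 * 64^3 ~= 0.52M params.
if __name__ == "__main__":
    m, batch = 64, 8
    x = torch.randn(batch, m * m)
    b1 = torch.randn(m, m, m) / m ** 0.5
    b2 = torch.randn(m, m, m) / m ** 0.5
    print(monarch_like_matmul(x, b1, b2).shape)  # torch.Size([8, 4096])
```

In this layout the weight storage drops from $n^2$ to roughly $2n^{1.5}$ parameters, which is the kind of footprint reduction the abstract refers to; the paper's contribution concerns making the corresponding multi-token multiplies fast on memory-constrained GPUs, rather than this naive formulation.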