SQS: Bayesian DNN Compression through Sparse Quantized Sub-distributions
By: Ziyi Wang, Nan Jiang, Guang Lin, and more
Potential Business Impact:
Makes AI smaller and faster for phones.
Compressing large-scale neural networks is essential for deploying models on resource-constrained devices. Most existing methods apply weight pruning or low-bit quantization individually, often yielding suboptimal compression rates when the performance drop must remain acceptable. We introduce SQS, a unified framework for simultaneous pruning and low-bit quantization via Bayesian variational learning, which achieves higher compression rates than prior baselines while maintaining comparable performance. The key idea is to employ a spike-and-slab prior to induce sparsity and to model quantized weights with Gaussian Mixture Models (GMMs) to enable low-bit precision. On the theory side, we provide a consistency result showing that our variational approach converges to a sparse, quantized deep neural network. Extensive experiments on compressing ResNet, BERT-base, Llama3, and Qwen2.5 models show that our method achieves higher compression rates than a range of existing methods with comparable performance drops.
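To make the key idea concrete, below is a minimal PyTorch sketch (not the authors' implementation) of a variational linear layer that combines a spike-and-slab gate for pruning with a mixture over a small codebook of quantization levels. The class name `SpikeSlabGMMLinear`, the Gumbel-softmax relaxation, the `n_levels` codebook size, and the KL regularizer shown are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (assumed, not the authors' code): spike-and-slab gating for
# sparsity plus a mixture over learnable quantization levels for low-bit weights.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpikeSlabGMMLinear(nn.Module):
    """Linear layer whose weights pass through a spike-and-slab gate (pruning)
    and a soft assignment to a shared codebook of quantization levels."""

    def __init__(self, in_features, out_features, n_levels=4):
        super().__init__()
        shape = (out_features, in_features)
        # Logit of the inclusion (slab) probability for each weight.
        self.gate_logit = nn.Parameter(torch.zeros(shape))
        # Per-weight mixture responsibilities over the quantization levels.
        self.level_logits = nn.Parameter(torch.zeros(*shape, n_levels))
        # Shared codebook of quantization levels (mixture means).
        self.levels = nn.Parameter(torch.linspace(-1.0, 1.0, n_levels))

    def forward(self, x, hard=False):
        # Gate near 1 keeps the weight; near 0 prunes it (the "spike").
        gate = torch.sigmoid(self.gate_logit)
        # Differentiable, near-discrete choice of a quantization level.
        assign = F.gumbel_softmax(self.level_logits, tau=0.5, hard=hard)
        w = gate * (assign @ self.levels)  # pruned + quantized weight
        return F.linear(x, w)

    def kl_sparsity(self, prior_keep=0.1):
        # KL between the Bernoulli gate posterior and a sparse Bernoulli prior,
        # pushing most gates toward the zero (spike) component.
        q = torch.sigmoid(self.gate_logit)
        p = torch.tensor(prior_keep)
        return (q * torch.log(q / p + 1e-8)
                + (1 - q) * torch.log((1 - q) / (1 - p) + 1e-8)).sum()


# Usage: add the (scaled) KL term to the task loss during training; at
# deployment, use hard=True and threshold the gates to obtain a sparse,
# low-bit weight tensor.
layer = SpikeSlabGMMLinear(16, 8)
y = layer(torch.randn(4, 16))
loss = y.pow(2).mean() + 1e-4 * layer.kl_sparsity()
```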
Similar Papers
BayesQ: Uncertainty-Guided Bayesian Quantization
Machine Learning (CS)
Makes computer programs run faster using less memory.
Spiking Brain Compression: Exploring One-Shot Post-Training Pruning and Quantization for Spiking Neural Networks
Machine Learning (CS)
Makes smart computer brains use less power.
UniQL: Unified Quantization and Low-rank Compression for Adaptive Edge LLMs
Machine Learning (CS)
Makes smart phone AI run much faster and smaller.