Optimizing Deep Neural Networks using Safety-Guided Self Compression
By: Mohammad Zbeeb, Mariam Salman, Mohammad Bazzi, and more
Potential Business Impact:
Shrinks smart computer programs without losing smarts.
The deployment of deep neural networks on resource-constrained devices necessitates effective model compression strategies that judiciously balance the reduction of model size with the preservation of performance. This study introduces a novel safety-driven quantization framework that leverages preservation sets to systematically prune and quantize neural network weights, thereby optimizing model complexity without compromising accuracy. The proposed methodology is rigorously evaluated on both a convolutional neural network (CNN) and an attention-based language model, demonstrating its applicability across diverse architectural paradigms. Experimental results reveal that our framework achieves up to a 2.5% enhancement in test accuracy relative to the original unquantized models while maintaining 60% of the initial model size. In comparison to conventional quantization techniques, our approach not only augments generalization by eliminating parameter noise and retaining essential weights but also reduces variance, thereby ensuring the retention of critical model features. These findings underscore the efficacy of safety-driven quantization as a robust and reliable strategy for the efficient optimization of deep learning models. The implementation and comprehensive experimental evaluations of our framework are publicly accessible at GitHub.
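The abstract does not specify implementation details, but the general idea of gating compression on a preservation set can be sketched as follows. This is a minimal PyTorch-style illustration, not the authors' algorithm: the names `preservation_loader`, `tolerance`, and `num_bits`, and the per-layer revert rule, are assumptions made for the example.

```python
# Sketch (assumed, not the paper's implementation): quantize each layer's
# weights, but keep the quantized version only if accuracy on a small
# "preservation set" of behavior-critical examples does not drop too far.
import torch
import torch.nn as nn

def quantize_tensor(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Uniform symmetric fake-quantization of a weight tensor."""
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = w.abs().max()
    scale = max_abs / qmax if max_abs > 0 else torch.tensor(1.0)
    return torch.round(w / scale).clamp(-qmax, qmax) * scale

@torch.no_grad()
def preservation_accuracy(model: nn.Module, loader) -> float:
    """Accuracy on the preservation set (a small, held-out subset)."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        pred = model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)

@torch.no_grad()
def safety_gated_quantize(model: nn.Module, preservation_loader,
                          num_bits: int = 8, tolerance: float = 0.005) -> nn.Module:
    """Quantize Linear/Conv2d weights layer by layer; revert any layer whose
    quantization degrades preservation-set accuracy by more than `tolerance`."""
    baseline = preservation_accuracy(model, preservation_loader)
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            original = module.weight.data.clone()
            module.weight.data = quantize_tensor(original, num_bits)
            if preservation_accuracy(model, preservation_loader) < baseline - tolerance:
                module.weight.data = original  # unsafe: restore full precision
    return model
```

The key design point this sketch conveys is that the preservation set acts as a safety check during compression, so only weight changes that leave critical behavior intact are retained.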
Similar Papers
An Efficient Compression of Deep Neural Network Checkpoints Based on Prediction and Context Modeling
Machine Learning (CS)
Shrinks computer learning files to save space.
A probabilistic framework for dynamic quantization
Machine Learning (CS)
Makes AI smarter and faster using less computer power.
Quantitative Analysis of Deeply Quantized Tiny Neural Networks Robust to Adversarial Attacks
Machine Learning (CS)
Makes smart programs smaller and safer from tricks.