Effective and Efficient One-pass Compression of Speech Foundation Models Using Sparsity-aware Self-pinching Gates
By: Haoning Xu, Zhaoqing Li, Youjun Chen, and more
Potential Business Impact:
Makes voice AI models smaller and faster.
This paper presents a novel approach to speech foundation model compression that tightly integrates model pruning and parameter update into a single stage. Highly compact, layer-level tied self-pinching gates, each containing only a single learnable threshold, are jointly trained with the uncompressed models and used for fine-grained neuron-level pruning. Experiments conducted on the LibriSpeech-100hr corpus suggest that the approach reduces the number of parameters of wav2vec2.0-base and HuBERT-large models by 65% and 60%, respectively, while incurring no statistically significant word error rate (WER) increase on the test-clean dataset. Compared to previously published methods on the same task, the approach not only achieves the lowest WER of 7.05% on the test-clean dataset under a comparable model compression ratio of 4.26x, but also requires at least 25% less model compression time.
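The abstract does not spell out the gating function, so the following is a minimal PyTorch sketch of the general idea: one learnable threshold shared across a layer, applied as a soft gate on per-neuron weight magnitudes so that pruning and parameter update happen in the same training pass. The class names, the sigmoid-based gate, the per-neuron L2-norm criterion, and the hyperparameters are all illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class SelfPinchingGate(nn.Module):
    """Sketch of a layer-level tied gate with a single learnable threshold.

    Neurons whose weight-row norm falls below the threshold are softly driven
    toward zero during joint training. The sigmoid gate and norm criterion
    are assumptions for illustration only.
    """

    def __init__(self, init_threshold: float = 0.01, temperature: float = 100.0):
        super().__init__()
        # Single learnable threshold tied across all neurons in the layer.
        self.tau = nn.Parameter(torch.tensor(init_threshold))
        self.temperature = temperature  # sharpness of the soft gate (assumed)

    def forward(self, weight: torch.Tensor) -> torch.Tensor:
        # Per-output-neuron score: L2 norm of each weight row (assumed criterion).
        scores = weight.norm(p=2, dim=1)                     # (out_features,)
        # Soft gate in [0, 1]: ~0 below the threshold, ~1 above it.
        gate = torch.sigmoid(self.temperature * (scores - self.tau))
        return weight * gate.unsqueeze(1)                    # broadcast over rows


class GatedLinear(nn.Module):
    """Linear layer whose weights pass through a shared self-pinching gate,
    so thresholding (pruning) and weight updates occur in one stage."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.gate = SelfPinchingGate()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pruned_weight = self.gate(self.linear.weight)
        return nn.functional.linear(x, pruned_weight, self.linear.bias)

    def neuron_sparsity(self) -> float:
        # Fraction of neurons whose gate is effectively closed (illustrative).
        with torch.no_grad():
            scores = self.linear.weight.norm(p=2, dim=1)
            gate = torch.sigmoid(self.gate.temperature * (scores - self.gate.tau))
            return (gate < 0.5).float().mean().item()


if __name__ == "__main__":
    layer = GatedLinear(768, 3072)   # dimensions of a wav2vec2.0-base FFN layer
    x = torch.randn(4, 768)
    y = layer(x)                     # forward pass with gated weights
    y.sum().backward()               # gradients reach both the weights and tau
    print(y.shape, f"neuron sparsity = {layer.neuron_sparsity():.2%}")
```

Because the threshold is a single parameter per layer, the gating adds negligible overhead to the model being compressed, which is consistent with the one-pass, compression-time savings claimed in the abstract.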
Similar Papers
Effective and Efficient Mixed Precision Quantization of Speech Foundation Models
Sound
Makes voice AI models smaller and faster.
Structured Sparsity and Weight-adaptive Pruning for Memory and Compute efficient Whisper models
Machine Learning (CS)
Makes speech recognition work on small devices.
Unfolding A Few Structures for The Many: Memory-Efficient Compression of Conformer and Speech Foundation Models
Sound
Makes speech recognition models smaller, faster.