Structured Sparsity and Weight-adaptive Pruning for Memory and Compute efficient Whisper models
By: Prasenjit K Mudi, Anshi Sachan, Dahlia Devapriya, and more
Potential Business Impact:
Makes speech recognition work on small devices.
Whisper models have achieved remarkable progress in speech recognition, yet their large size remains a bottleneck for deployment on resource-constrained edge devices. This paper proposes a framework for designing fine-tuned Whisper variants that address this problem. Structured sparsity is enforced via a Sparse Group LASSO penalty used as a loss regularizer, reducing the number of floating-point operations (FLOPs). Further, a weight-statistics-aware pruning algorithm is proposed. We also design a custom text normalizer for word error rate (WER) evaluation. On the Common Voice 11.0 Hindi dataset, without degrading WER, we obtain (a) a 35.4% reduction in model parameters, 14.25% lower memory consumption, and 18.5% fewer FLOPs on Whisper-small, and (b) a 31% reduction in model parameters, 15.29% lower memory consumption, and 16.95% fewer FLOPs on Whisper-medium; and (c) we substantially outperform the state-of-the-art Iterative Magnitude Pruning based method by pruning 18.7% more parameters while also reducing WER by 12.31.
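The abstract does not spell out the paper's exact grouping scheme or hyperparameters for the Sparse Group LASSO regularizer. The sketch below shows one common way to add such a penalty to a fine-tuning loss in PyTorch, treating each 2-D weight matrix's output rows as groups; the grouping choice and the values of lam and alpha are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

def sparse_group_lasso(model: nn.Module, lam: float = 1e-4, alpha: float = 0.5) -> torch.Tensor:
    """Sparse Group LASSO penalty: an alpha-weighted mix of an element-wise L1 term
    (sparsity within groups) and a group-wise L2 term (whole-group sparsity).
    Groups here are the output rows of each 2-D weight; this grouping, lam, and
    alpha are illustrative assumptions, not taken from the paper."""
    device = next(model.parameters()).device
    l1_term = torch.zeros((), device=device)
    group_term = torch.zeros((), device=device)
    for param in model.parameters():
        if param.dim() < 2 or not param.requires_grad:
            continue  # skip biases and layer-norm scales
        rows = param.view(param.size(0), -1)
        l1_term = l1_term + rows.abs().sum()
        # sqrt(group size) scaling is the standard group-LASSO weighting
        group_term = group_term + (rows.size(1) ** 0.5) * rows.norm(p=2, dim=1).sum()
    return lam * (alpha * l1_term + (1.0 - alpha) * group_term)

# Schematic fine-tuning step: add the penalty to the task loss so whole rows
# are driven toward zero and can later be pruned as structured groups.
# loss = ce_loss + sparse_group_lasso(model)
# loss.backward(); optimizer.step()
```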
Similar Papers
Adapting Whisper for Lightweight and Efficient Automatic Speech Recognition of Children for On-device Edge Applications
Audio and Speech Processing
Lets kids' voices work without sending data away.
Pruning as Regularization: Sensitivity-Aware One-Shot Pruning in ASR
Audio and Speech Processing
Makes voice assistants smaller and better at understanding speech.
BaldWhisper: Faster Whisper with Head Shearing and Layer Merging
Audio and Speech Processing
Makes voice assistants run faster with a smaller model.