Towards One-bit ASR: Extremely Low-bit Conformer Quantization Using Co-training and Stochastic Precision
By: Zhaoqing Li, Haoning Xu, Zengrui Jin, and more
Potential Business Impact:
Makes speech recognition models smaller, faster, and cheaper to run.
Model compression has become an emerging need as the sizes of modern speech systems rapidly increase. In this paper, we study model weight quantization, which directly reduces the memory footprint to accommodate resource-constrained applications. We propose novel approaches to perform extremely low-bit (i.e., 2-bit and 1-bit) quantization of Conformer automatic speech recognition (ASR) systems using multiple-precision model co-training, stochastic precision, and tensor-wise learnable scaling factors to alleviate quantization-incurred performance loss. The proposed methods achieve performance-lossless 2-bit and 1-bit quantization of Conformer ASR systems trained on the 300-hr Switchboard and 960-hr LibriSpeech corpora. Maximum overall compression ratios of 16.2 and 16.6 times are achieved on the two tasks, respectively, without a statistically significant increase in word error rate (WER) over the full-precision baseline systems.
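To make the core idea concrete, below is a minimal PyTorch sketch of 1-bit weight quantization with a per-tensor learnable scaling factor and a straight-through estimator. The class names (BinaryQuantSTE, BinaryLinear) and the alpha initialization heuristic are illustrative assumptions for this sketch, not the authors' implementation, and the co-training and stochastic precision components of the paper are not shown.

```python
# Sketch: binarize weights to {-1, +1}, rescale by a learnable per-tensor
# scale alpha, and pass gradients through the sign() with an STE.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinaryQuantSTE(torch.autograd.Function):
    """Binarize weights to {-1, +1}; pass gradients straight through."""

    @staticmethod
    def forward(ctx, weight):
        return torch.sign(weight)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # straight-through estimator


class BinaryLinear(nn.Module):
    """Linear layer with weights binarized at forward time and rescaled
    by a single learnable per-tensor scaling factor alpha (assumed setup)."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Initialize alpha to the mean absolute weight, a common heuristic.
        self.alpha = nn.Parameter(self.weight.abs().mean().detach().clone())

    def forward(self, x):
        w_bin = BinaryQuantSTE.apply(self.weight)  # values in {-1, +1}
        return F.linear(x, self.alpha * w_bin, self.bias)


if __name__ == "__main__":
    layer = BinaryLinear(80, 256)
    x = torch.randn(4, 80)
    y = layer(x)
    y.sum().backward()  # gradients reach both weight and alpha via the STE
    print(y.shape, layer.alpha.grad is not None)
```

In this sketch the storage cost per layer is roughly one bit per weight plus one full-precision scalar, which is where compression ratios in the 16x range come from; a 2-bit variant would replace sign() with a small quantization codebook per tensor.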
Similar Papers
Edge-ASR: Towards Low-Bit Quantization of Automatic Speech Recognition Models
Sound
Makes voice assistants work on small devices.
Quantizing Whisper-small: How design choices affect ASR performance
Audio and Speech Processing
Shrinks AI speech models for phones.
BitTTS: Highly Compact Text-to-Speech Using 1.58-bit Quantization and Weight Indexing
Audio and Speech Processing
Makes phones talk with tiny, smart programs.