BitTTS: Highly Compact Text-to-Speech Using 1.58-bit Quantization and Weight Indexing
By: Masaya Kawamura, Takuya Hasumi, Yuma Shirahata, and more
Potential Business Impact:
Makes phones talk with tiny, smart programs.
This paper proposes a highly compact, lightweight text-to-speech (TTS) model for on-device applications. To reduce the model size, the proposed model introduces two techniques. First, we introduce quantization-aware training (QAT), which quantizes model parameters during training to as low as 1.58 bits; in this case, most of the 32-bit model parameters are quantized to ternary values {-1, 0, 1}. Second, we propose a method named weight indexing, in which a group of 1.58-bit weights is saved as a single int8 index. This allows for efficient storage of model parameters even on hardware that handles values in 8-bit units. Experimental results demonstrate that the proposed method achieves an 83% reduction in model size while outperforming a non-quantized baseline of similar size in synthesis quality.
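To make the two techniques concrete, here is a minimal NumPy sketch, not the authors' implementation. The ternarization step uses the absmean scaling popularized by BitNet b1.58 (the abstract does not spell out the paper's exact QAT recipe), and the packing assumes a group size of 5, since 3^5 = 243 distinct ternary patterns fit in one byte. The group size and the ternarize/pack/unpack helper names are illustrative assumptions.

```python
import numpy as np

GROUP = 5  # assumed group size: 3^5 = 243 ternary patterns fit in one byte

def ternarize(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Quantize float weights to {-1, 0, 1} with a per-tensor scale.

    Uses the absmean scheme from BitNet b1.58 (an assumption; the paper
    may differ in detail). During QAT, the rounding would be wrapped in a
    straight-through estimator so gradients reach the latent float
    weights; the dequantized weight is approximately q * scale.
    """
    scale = np.abs(w).mean() + 1e-8
    q = np.clip(np.round(w / scale), -1, 1).astype(np.int8)
    return q, scale

def pack(q: np.ndarray) -> np.ndarray:
    """Encode each group of GROUP ternary weights as one base-3 index per byte."""
    flat = q.flatten()
    flat = np.pad(flat, (0, (-len(flat)) % GROUP))  # pad tail with zeros
    digits = (flat + 1).reshape(-1, GROUP)          # map {-1,0,1} -> {0,1,2}
    powers = 3 ** np.arange(GROUP)
    return (digits @ powers).astype(np.uint8)       # index in [0, 242]

def unpack(idx: np.ndarray, n: int) -> np.ndarray:
    """Decode byte indices back to the first n ternary weights."""
    digits = (idx[:, None] // 3 ** np.arange(GROUP)) % 3
    return (digits.flatten()[:n] - 1).astype(np.int8)

w = np.random.randn(4, 10).astype(np.float32)
q, scale = ternarize(w)
packed = pack(q)
assert np.array_equal(unpack(packed, q.size), q.flatten())
print(f"{q.size} ternary weights -> {packed.size} bytes "
      f"({8 * packed.size / q.size:.2f} bits/weight)")
```

Packing five ternary weights per byte stores each weight in 8/5 = 1.6 bits, close to the theoretical log2(3) ≈ 1.585 bits, which is where the "1.58-bit" figure comes from.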
Similar Papers
Quantizing Whisper-small: How design choices affect ASR performance
Audio and Speech Processing
Shrinks AI speech models for phones.
Towards One-bit ASR: Extremely Low-bit Conformer Quantization Using Co-training and Stochastic Precision
Sound
Makes speech recognition smaller, faster, and cheaper.
Edge-ASR: Towards Low-Bit Quantization of Automatic Speech Recognition Models
Sound
Makes voice assistants work on small devices.