BitTTS: Highly Compact Text-to-Speech Using 1.58-bit Quantization and Weight Indexing

Published: June 4, 2025 | arXiv ID: 2506.03515v1

By: Masaya Kawamura, Takuya Hasumi, Yuma Shirahata, and more

Potential Business Impact:

Lets phones and other small devices generate speech using a very compact on-device model.

Business Areas:
Text Analytics, Data and Analytics, Software

This paper proposes a highly compact, lightweight text-to-speech (TTS) model for on-device applications. To reduce the model size, the proposed model introduces two techniques. First, we introduce quantization-aware training (QAT), which quantizes model parameters during training to as low as 1.58 bits. In this case, most of the 32-bit model parameters are quantized to the ternary values {-1, 0, 1}. Second, we propose a method named weight indexing, in which a group of 1.58-bit weights is stored as a single int8 index. This allows for efficient storage of model parameters, even on hardware that handles values in 8-bit units. Experimental results demonstrate that the proposed method achieved an 83% reduction in model size while outperforming a non-quantized baseline of similar model size in synthesis quality.
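To make the first technique concrete, below is a minimal sketch of ternary weight quantization. The abstract does not spell out BitTTS's exact quantizer, so this follows the common "absmean" ternary scheme popularized by BitNet b1.58: scale the weights by their mean absolute value, round, and clip to {-1, 0, 1}. The function name and the per-tensor scale are illustrative assumptions.

```python
import numpy as np

def ternary_quantize(w, eps=1e-8):
    """Quantize a float32 weight tensor to {-1, 0, 1} plus a scale.

    Assumption: the common absmean ternary scheme (as in BitNet b1.58);
    the summary above does not specify BitTTS's exact quantizer.
    """
    scale = np.mean(np.abs(w)) + eps                      # per-tensor scaling factor
    w_ternary = np.clip(np.round(w / scale), -1, 1).astype(np.int8)
    return w_ternary, scale

# During QAT, the forward pass would use w_ternary * scale, while gradients
# flow back to the latent float weights via a straight-through estimator.
w = np.random.randn(4, 4).astype(np.float32)
q, s = ternary_quantize(w)
print(q)      # entries in {-1, 0, 1}
print(q * s)  # dequantized approximation of w
```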
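The second technique, weight indexing, exploits the fact that a ternary weight needs only log2(3) ≈ 1.58 bits, so several weights can share one byte. The sketch below packs groups of ternary weights into base-3 indices; the group size of 5 is an assumption (3^5 = 243 ≤ 256 fits in one 8-bit index, giving 1.6 bits per weight), as the summary does not state BitTTS's choice.

```python
import numpy as np

GROUP = 5  # assumed group size: 3**5 = 243 <= 256, so 5 ternary weights per byte

def pack_ternary(w_ternary):
    """Encode a flat array of {-1, 0, 1} weights as base-3 indices, one uint8 per group."""
    digits = (w_ternary.astype(np.int64) + 1).reshape(-1, GROUP)  # map {-1,0,1} -> {0,1,2}
    powers = 3 ** np.arange(GROUP)
    return (digits @ powers).astype(np.uint8)

def unpack_ternary(idx):
    """Decode uint8 indices back to the original ternary weights."""
    idx = idx.astype(np.int64)[:, None]
    digits = (idx // 3 ** np.arange(GROUP)) % 3   # recover base-3 digits
    return (digits - 1).reshape(-1).astype(np.int8)

w = np.random.choice([-1, 0, 1], size=20).astype(np.int8)  # length must be a multiple of GROUP
packed = pack_ternary(w)
assert np.array_equal(unpack_ternary(packed), w)
print(f"{w.nbytes} bytes -> {packed.nbytes} bytes")  # 20 -> 4, i.e. ~1.6 bits per weight
```

Storing one int8 index per group, rather than one int8 per ternary weight, is what lets 8-bit-oriented hardware hold the 1.58-bit parameters without wasting most of each byte.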

Page Count
5 pages

Category
Electrical Engineering and Systems Science: Audio and Speech Processing