Exploring the Effect of Segmentation and Vocabulary Size on Speech Tokenization for Speech Language Models
By: Shunsuke Kando, Yusuke Miyao, Shinnosuke Takamichi
Potential Business Impact:
Makes talking computers understand better, faster.
The purpose of speech tokenization is to transform a speech signal into a sequence of discrete representations, serving as the foundation for speech language models (SLMs). While many options exist for speech tokenization, their effect on the performance of SLMs remains unclear. This paper investigates two key aspects of speech tokenization: the segmentation width and the cluster size of discrete units. First, we segment speech signals into fixed or variable widths and pool the representations within each segment. We then train K-means models with multiple cluster sizes. Through evaluation on zero-shot spoken language understanding benchmarks, we find a positive effect of moderately coarse segmentation and larger cluster sizes. Notably, among the best-performing models, the most efficient one achieves a 50% reduction in training data and a 70% decrease in training runtime. Our analysis highlights the importance of combining multiple tokens to enhance fine-grained spoken language understanding.
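To make the two factors studied here concrete, the following is a minimal Python sketch, not the authors' code: it mean-pools frame-level encoder features over fixed-width segments and fits K-means at several cluster sizes to produce discrete token sequences. The `pool_fixed_width` helper, the random stand-in features, and the specific widths and cluster sizes are illustrative assumptions.

```python
# Sketch of the two tokenization knobs explored in the paper:
# segmentation width and K-means cluster size (values below are illustrative).
import numpy as np
from sklearn.cluster import KMeans

def pool_fixed_width(frames: np.ndarray, width: int) -> np.ndarray:
    """Mean-pool frame-level features of shape (T, D) into segments of `width` frames."""
    n_segments = len(frames) // width
    trimmed = frames[: n_segments * width]
    return trimmed.reshape(n_segments, width, -1).mean(axis=1)

# Toy stand-in for a pretrained speech encoder's output: 4000 frames, 64-dim features.
rng = np.random.default_rng(0)
frames = rng.standard_normal((4000, 64)).astype(np.float32)

# Coarser segmentation (larger width) yields shorter token sequences per utterance.
segments = pool_fixed_width(frames, width=4)

# A larger cluster size gives the SLM a bigger discrete vocabulary.
for n_clusters in (50, 200, 500):
    km = KMeans(n_clusters=n_clusters, random_state=0).fit(segments)
    tokens = km.predict(segments)  # discrete unit sequence that would be fed to the SLM
    print(n_clusters, tokens[:10])
```

In practice, the frame-level features would come from a pretrained speech encoder rather than random data, and variable-width segmentation would replace the fixed-width pooling shown here.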
Similar Papers
Speech Tokenizer is Key to Consistent Representation
Machine Learning (CS)
Makes computers understand talking better, even feelings.
An Empirical Analysis of Discrete Unit Representations in Speech Language Modeling Pre-training
Computation and Language
Teaches computers to understand spoken words better.
Recent Advances in Discrete Speech Tokens: A Review
Audio and Speech Processing
Makes computers understand and talk like humans.