Benchmarking Prosody Encoding in Discrete Speech Tokens
By: Kentaro Onda, Satoru Fukayama, Daisuke Saito, and more
Potential Business Impact:
Makes computers understand speech's emotion and tone.
Recently, discrete tokens derived from self-supervised learning (SSL) models via k-means clustering have been actively studied as pseudo-text in speech language models and as efficient intermediate representations for various tasks. However, these discrete tokens are typically learned in advance, separately from the training of the language model or downstream task. As a result, discretization choices, such as which SSL model to use or how many clusters to fit, must be made heuristically. In particular, speech language models are expected to understand and generate responses that reflect not only semantic content but also prosodic features, yet there has been limited research on how well discrete tokens capture prosodic information. To address this gap, this study conducts a comprehensive analysis of prosodic encoding in discrete tokens, based on their sensitivity to artificially modified prosody, with the aim of providing practical guidelines for designing discrete tokens.
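To make the setup concrete, here is a minimal sketch of the pipeline the abstract describes: SSL features are quantized into discrete tokens with k-means, and prosody encoding is probed by checking how much the token sequence changes after an artificial prosody edit (here, a pitch shift). The specific choices below, HuBERT-Base layer 6, 100 clusters, a +4 semitone shift, and the file path, are illustrative assumptions, not the paper's actual configuration, and a real codebook would be trained offline on a large corpus rather than on a single utterance.

```python
# Sketch (assumed setup, not the paper's): discretize SSL features with k-means,
# then measure token-sequence sensitivity to an artificial prosody modification.
import torch
import torchaudio
from sklearn.cluster import KMeans

SAMPLE_RATE = 16000
LAYER = 6          # which SSL layer to discretize (assumption)
N_CLUSTERS = 100   # k-means vocabulary size (assumption)
N_STEPS = 4        # pitch shift in semitones (assumption)

bundle = torchaudio.pipelines.HUBERT_BASE
ssl_model = bundle.get_model().eval()


def frame_features(waveform: torch.Tensor) -> torch.Tensor:
    """Return frame-level SSL features of shape (frames, dim) for one utterance."""
    with torch.inference_mode():
        layer_outputs, _ = ssl_model.extract_features(waveform)
    return layer_outputs[LAYER - 1].squeeze(0)


# Load an utterance (placeholder path) and resample to the model's rate.
waveform, sr = torchaudio.load("utterance.wav")
waveform = torchaudio.functional.resample(waveform, sr, SAMPLE_RATE)

# Fit the k-means "tokenizer" on this utterance's features; in practice the
# codebook is learned in advance on a large corpus, separately from any task.
feats = frame_features(waveform).numpy()
kmeans = KMeans(n_clusters=N_CLUSTERS, n_init=10, random_state=0).fit(feats)
tokens = kmeans.predict(feats)

# Artificially modify prosody: shift the pitch while keeping duration constant.
shifted = torchaudio.functional.pitch_shift(waveform, SAMPLE_RATE, n_steps=N_STEPS)
shifted_tokens = kmeans.predict(frame_features(shifted).numpy())

# Sensitivity proxy: fraction of frames whose discrete token changed.
n = min(len(tokens), len(shifted_tokens))
changed = (tokens[:n] != shifted_tokens[:n]).mean()
print(f"Frames with changed token after +{N_STEPS} semitones: {changed:.1%}")
```

Under this kind of probe, a token inventory that barely changes when pitch is shifted is encoding little prosodic information, which is the property the paper benchmarks across discretization choices.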
Similar Papers
Discrete Audio Tokens: More Than a Survey!
Sound
Makes computers understand sounds better.
Recent Advances in Discrete Speech Tokens: A Review
Audio and Speech Processing
Makes computers understand and talk like humans.
Speech Discrete Tokens or Continuous Features? A Comparative Analysis for Spoken Language Understanding in SpeechLLMs
Computation and Language
Makes computers understand talking better than before.