Benchmarking Prosody Encoding in Discrete Speech Tokens

Published: August 15, 2025 | arXiv ID: 2508.11224v1

By: Kentaro Onda, Satoru Fukayama, Daisuke Saito, and more

Potential Business Impact:

Helps computers understand the emotion and tone of speech.

Recently, discrete tokens derived from self-supervised learning (SSL) models via k-means clustering have been actively studied as pseudo-text in speech language models and as efficient intermediate representations for various tasks. However, these discrete tokens are typically learned in advance, separately from the training of language models or downstream tasks. As a result, choices related to discretization, such as the SSL model used or the number of clusters, must be made heuristically. In particular, speech language models are expected to understand and generate responses that reflect not only semantic content but also prosodic features. Yet there has been limited research on the ability of discrete tokens to capture prosodic information. To address this gap, this study conducts a comprehensive analysis of prosodic encoding based on the tokens' sensitivity to artificially modified prosody, aiming to provide practical guidelines for designing discrete tokens.
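The pipeline the abstract describes, clustering frame-level SSL features with k-means, using cluster IDs as discrete tokens, and then measuring how those tokens react to a prosody modification, can be illustrated with a toy sketch. Everything here is an assumption for illustration: the random "features", the cluster count, and the simple feature perturbation standing in for an actual pitch or duration change are not the paper's setup.

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Minimal Lloyd's algorithm; returns cluster centroids."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each frame to its nearest centroid.
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute centroids (keep the old one if a cluster is empty).
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = features[labels == c].mean(axis=0)
    return centroids

def tokenize(features, centroids):
    """Map each frame to the ID of its nearest centroid: its discrete token."""
    dists = np.linalg.norm(features[:, None] - centroids[None], axis=-1)
    return dists.argmin(axis=1)

rng = np.random.default_rng(42)
feats = rng.normal(size=(200, 8))   # toy stand-in: 200 frames of 8-dim SSL features
centroids = kmeans(feats, k=16)
tokens = tokenize(feats, centroids)

# Simulated prosody modification: shift one feature dimension,
# a crude stand-in for, e.g., pitch-shifting the input audio.
modified = feats.copy()
modified[:, 0] += 1.5
mod_tokens = tokenize(modified, centroids)

# Sensitivity = fraction of frames whose token changed after modification.
sensitivity = float((tokens != mod_tokens).mean())
```

A sensitivity near 0 would mean the tokens discard the modified attribute entirely; a high value means it leaks into the token sequence. The paper's actual analysis presumably uses real SSL encoders and controlled prosody manipulations rather than this synthetic perturbation.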

Country of Origin
🇯🇵 Japan


Page Count
8 pages

Category
Computer Science:
Sound