Word stress in self-supervised speech models: A cross-linguistic comparison

Published: July 7, 2025 | arXiv ID: 2507.04738v1

By: Martijn Bentum, Louis ten Bosch, Tomas O. Lentz

Potential Business Impact:

Helps computers recognize word stress across languages, a cue useful for speech recognition and synthesis.

Business Areas:
Semantic Web Internet Services

In this paper we study word stress representations learned by self-supervised speech models (S3M), specifically the Wav2vec 2.0 model. We investigate the S3M representations of word stress for five languages: three with variable or lexical stress (Dutch, English, and German) and two with fixed or demarcative stress (Hungarian and Polish). We train diagnostic stress classifiers on S3M embeddings and show that they distinguish stressed from unstressed syllables in short read-aloud sentences with high accuracy. We also test for language-specificity effects in the S3M word stress representations. The results indicate that these representations are language-specific, with a larger difference between the variable-stress and fixed-stress language sets than within each set.
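The diagnostic-classifier setup described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes per-syllable embeddings have already been extracted (here replaced by synthetic vectors with a small mean shift standing in for a stress cue; real Wav2vec 2.0 base embeddings are 768-dimensional) and probes them with a simple linear classifier, so high accuracy indicates the stress information is linearly decodable.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for per-syllable Wav2vec 2.0 embeddings.
# In the real setup, each vector would be pooled from the model's
# hidden states over one syllable's time span.
rng = np.random.default_rng(0)
dim, n = 768, 1000
labels = rng.integers(0, 2, size=n)          # 1 = stressed, 0 = unstressed
# Simulated stress cue: stressed syllables get a small mean shift.
emb = rng.normal(size=(n, dim)) + 0.2 * labels[:, None]

X_train, X_test, y_train, y_test = train_test_split(
    emb, labels, test_size=0.2, random_state=0)

# Diagnostic (probing) classifier: deliberately simple, so its accuracy
# reflects what the embeddings encode rather than the probe's capacity.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = accuracy_score(y_test, probe.predict(X_test))
print(f"stress probe accuracy: {acc:.2f}")
```

The paper's cross-linguistic comparison would correspond to training such a probe on embeddings from one language and evaluating it on another; a drop in transfer accuracy is evidence that the stress representations are language-specific.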

Country of Origin
🇳🇱 Netherlands

Repos / Data Links

Page Count
5 pages

Category
Computer Science:
Computation and Language