Word stress in self-supervised speech models: A cross-linguistic comparison
By: Martijn Bentum, Louis ten Bosch, Tomas O. Lentz
Potential Business Impact:
Helps computers recognize word stress across languages.
In this paper we study the word stress representations learned by self-supervised speech models (S3M), specifically the Wav2vec 2.0 model. We investigate the S3M representations of word stress for five languages: three with variable or lexical stress (Dutch, English, and German) and two with fixed or demarcative stress (Hungarian and Polish). We train diagnostic stress classifiers on S3M embeddings and show that they distinguish stressed from unstressed syllables in read-aloud short sentences with high accuracy. We also test whether the S3M word stress representations are language-specific. The results indicate that they are, with the largest differences found between the variable-stress and fixed-stress language groups.
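To make the diagnostic-classifier setup concrete, below is a minimal sketch of probing Wav2vec 2.0 embeddings for stress. It is not the paper's exact pipeline: the checkpoint name, the choice of transformer layer, mean-pooling over syllable spans, and the logistic-regression probe are all illustrative assumptions, and the `syllables` variable stands in for a time-aligned corpus the paper would supply.

```python
# Sketch of a diagnostic stress probe on Wav2vec 2.0 embeddings.
# Assumptions: Hugging Face Transformers, scikit-learn, 16 kHz mono audio,
# and syllable time spans from a forced alignment (not shown here).
import numpy as np
import torch
from transformers import Wav2Vec2Model
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")  # checkpoint is an assumption
model.eval()

FRAME_RATE = 50  # Wav2vec 2.0 emits one embedding frame per 20 ms of audio

def syllable_embedding(waveform: np.ndarray, start_s: float, end_s: float) -> np.ndarray:
    """Mean-pool one transformer layer's hidden states over a syllable's time span."""
    inputs = torch.from_numpy(waveform).float().unsqueeze(0)  # shape (1, samples)
    with torch.no_grad():
        # hidden_states[9] picks a mid-to-late layer; the layer index is an assumption
        hidden = model(inputs, output_hidden_states=True).hidden_states[9]
    lo = int(start_s * FRAME_RATE)
    hi = max(int(end_s * FRAME_RATE), lo + 1)  # guarantee at least one frame
    return hidden[0, lo:hi].mean(dim=0).numpy()

# `syllables` is hypothetical: tuples of (waveform, start_s, end_s, stress_label),
# with stress_label 1 for stressed and 0 for unstressed syllables.
# X = np.stack([syllable_embedding(w, s, e) for w, s, e, _ in syllables])
# y = np.array([label for *_, label in syllables])
# probe = LogisticRegression(max_iter=1000)
# print(cross_val_score(probe, X, y, cv=5).mean())  # probing accuracy
```

Training one such probe per language, then evaluating it on the other languages, is one way to test the language-specificity claim: a drop in cross-language accuracy suggests the stress representations are not shared across languages.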
Similar Papers
StressTest: Can YOUR Speech LM Handle the Stress?
Computation and Language
Helps computers understand meaning from spoken emphasis.
Self-supervised learning of speech representations with Dutch archival data
Sound
Teaches computers to understand Dutch speech better.
Identifying Primary Stress Across Related Languages and Dialects with Transformer-based Speech Encoder Models
Audio and Speech Processing
Helps computers understand spoken stress in new languages.