Prosodic Structure Beyond Lexical Content: A Study of Self-Supervised Learning
By: Sarenne Wallbridge, Christoph Minixhofer, Catherine Lai, and more
Potential Business Impact:
Helps computers understand emotions from how people talk.
People exploit the predictability of lexical structures during text comprehension. Though predictable structure is also present in speech, the degree to which prosody (e.g., intonation, tempo, and loudness) contributes to such structure independently of the lexical content is unclear. This study leverages self-supervised learning (SSL) to examine the temporal granularity of structures in the acoustic correlates of prosody. Representations from our proposed Masked Prosody Model can predict perceptual labels that depend on local information, such as word boundaries, but provide the most value for labels involving longer-term structure, such as emotion recognition. Probing experiments across various perceptual labels show strong relative gains over untransformed pitch, energy, and voice activity features. Our results reveal the importance of the SSL training objective's timescale and highlight the value of complex SSL-encoded structures compared to more constrained classical structures.
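The core idea described in the abstract, masking spans of low-level prosodic features (pitch, energy, voice activity) and training an encoder to reconstruct them, can be sketched in a few lines of PyTorch. The sketch below is illustrative only, not the authors' implementation: the class name, feature dimensionality, span length, masking rate, encoder architecture, and reconstruction loss are all assumptions, and the actual Masked Prosody Model may differ (for instance, in how the features are quantized or which encoder variant is used).

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedProsodySketch(nn.Module):
    """Illustrative masked-prediction model over frame-level prosodic
    features (pitch, energy, voice activity). All hyperparameters here
    are assumptions, not values from the paper."""

    def __init__(self, n_feats=3, d_model=128, n_heads=4, n_layers=4):
        super().__init__()
        self.proj_in = nn.Linear(n_feats, d_model)
        self.mask_emb = nn.Parameter(torch.zeros(d_model))  # learned [MASK] vector
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.proj_out = nn.Linear(d_model, n_feats)

    def forward(self, feats, span=10, p_mask=0.15):
        """feats: (batch, frames, 3) -- per-frame pitch, energy, voice activity."""
        batch, frames, _ = feats.shape
        # Sample contiguous masked spans; `span` sets the timescale of the
        # objective, which the paper argues matters for what gets encoded.
        mask = torch.zeros(batch, frames, dtype=torch.bool, device=feats.device)
        n_spans = max(1, int(frames * p_mask / span))
        for b in range(batch):
            starts = torch.randint(0, frames - span, (n_spans,))
            for s in starts:
                mask[b, s:s + span] = True
        x = self.proj_in(feats)
        # Replace masked frames with the learned mask embedding.
        x = torch.where(mask.unsqueeze(-1), self.mask_emb.expand_as(x), x)
        h = self.encoder(x)              # contextual prosody representations
        recon = self.proj_out(h)
        # Reconstruction loss on masked positions only.
        loss = F.mse_loss(recon[mask], feats[mask])
        return loss, h                   # h feeds downstream probing classifiers

if __name__ == "__main__":
    model = MaskedProsodySketch()
    feats = torch.randn(2, 200, 3)       # dummy pitch/energy/VAD trajectories
    loss, reps = model(feats)
    loss.backward()                       # one self-supervised training step

The probing experiments the abstract refers to then amount to freezing such an encoder and training a small (often linear) classifier on its frame- or utterance-level representations to predict perceptual labels, from local ones like word boundaries to longer-term ones like emotion categories, comparing against the same classifier trained on the untransformed pitch, energy, and voice activity inputs.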
Similar Papers
On the Contribution of Lexical Features to Speech Emotion Recognition
Audio and Speech Processing
Lets computers understand feelings from spoken words.
HuLA: Prosody-Aware Anti-Spoofing with Multi-Task Learning for Expressive and Emotional Synthetic Speech
Audio and Speech Processing
Catches fake voices by listening to how they sound.
Layer-wise Analysis for Quality of Multilingual Synthesized Speech
Audio and Speech Processing
Makes computer voices sound more human-like.