Layer-wise Analysis for Quality of Multilingual Synthesized Speech
By: Erica Cooper, Takuma Okamoto, Yamato Ohtani, and others
Potential Business Impact:
Helps automatically judge how human-like computer voices sound.
While supervised quality predictors for synthesized speech have demonstrated strong correlations with human ratings, their requirement for in-domain labeled training data hinders their generalization to new domains. Unsupervised approaches based on pretrained self-supervised learning (SSL) models and automatic speech recognition (ASR) models are a promising alternative; however, little is known about how these models encode information about speech quality. Towards the goal of better understanding how different aspects of speech quality are encoded in a multilingual setting, we present a layer-wise analysis of multilingual pretrained speech models based on reference modeling. We find that features extracted from early SSL layers correlate with human ratings of synthesized speech, and that later layers of ASR models can predict the quality of non-neural systems as well as intelligibility. We also demonstrate the importance of using well-matched reference data.
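The reference-modeling idea in the abstract can be sketched concretely: for each layer of a pretrained model, score a synthesized utterance by its feature distance to a reference, then check which layer's scores best track human ratings. The sketch below is a toy, self-contained illustration with synthetic per-layer features and invented MOS values, not the paper's actual pipeline; `layer_quality_correlations` and the data shapes are assumptions for illustration only.

```python
import math
import random


def spearman(x, y):
    """Spearman rank correlation (assumes no tied values)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)


def layer_quality_correlations(ref_feats, syn_feats, mos):
    """For each layer, correlate a reference-based quality proxy with MOS.

    ref_feats: per-layer reference feature vectors (list of lists).
    syn_feats: one entry per utterance, each a list of per-layer vectors.
    mos:       human mean-opinion scores, one per utterance.
    Returns one Spearman correlation per layer.
    """
    corrs = []
    for layer in range(len(ref_feats)):
        scores = []
        for utt in syn_feats:
            dist = math.sqrt(sum((a - b) ** 2
                                 for a, b in zip(utt[layer], ref_feats[layer])))
            # Closer to the reference -> higher predicted quality.
            scores.append(-dist)
        corrs.append(spearman(scores, mos))
    return corrs


# Toy data: only layer 0 is constructed to degrade as MOS drops.
random.seed(0)
mos = [1.2, 1.9, 2.5, 3.1, 3.8, 4.2, 4.6, 4.9]  # hypothetical ratings
dim, n_layers = 32, 3
ref = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_layers)]
syn = []
for m in mos:
    layers = []
    for layer in range(n_layers):
        noise = (5.0 - m) if layer == 0 else 1.0
        layers.append([v + random.gauss(0, noise) for v in ref[layer]])
    syn.append(layers)

corrs = layer_quality_correlations(ref, syn, mos)
```

In this toy setup the layer whose distances were built to track quality (layer 0) should show the strongest correlation, mirroring the paper's finding that quality information concentrates in particular layers.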
Similar Papers
Selection of Layers from Self-supervised Learning Models for Predicting Mean-Opinion-Score of Speech
Audio and Speech Processing
Makes computers judge sound quality better.
Leveraging Audio-Visual Data to Reduce the Multilingual Gap in Self-Supervised Speech Models
Computation and Language
Helps computers understand many languages better.
An Effective Strategy for Modeling Score Ordinality and Non-uniform Intervals in Automated Speaking Assessment
Audio and Speech Processing
Helps computers judge how well people speak English.