Building Tailored Speech Recognizers for Japanese Speaking Assessment
By: Yotaro Kubo, Richard Sproat, Chihiro Taguchi, and more
Potential Business Impact:
Helps computers understand Japanese speech better.
This paper presents methods for building speech recognizers tailored for Japanese speaking assessment tasks. Specifically, we build a speech recognizer that outputs phonemic labels with accent markers. Although Japanese is resource-rich, only a small amount of data is available for training models to produce accurate phonemic transcriptions that include accent marks. We propose two methods to mitigate this data sparsity. First, a multitask training scheme introduces auxiliary loss functions that estimate orthographic text labels and pitch patterns of the input signal, so that utterances with only orthographic annotations can be leveraged in training. The second method fuses two estimators: one over phonetic alphabet strings and the other over text token sequences. To combine these estimates we develop an algorithm based on the finite-state transducer framework. Our results indicate that multitask learning and fusion are effective for building an accurate phonemic recognizer, and that this approach is advantageous compared to generic multilingual recognizers. We also compare the relative advantages of the proposed methods. Our proposed methods reduced the average mora-label error rate from 12.3% to 7.1% on the CSJ core evaluation sets.
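To make the multitask idea concrete, here is a minimal sketch (not the authors' code) of how a shared encoder could be trained with the main phonemic-with-accent objective plus auxiliary orthographic and pitch losses, so that utterances lacking phonemic annotations still contribute through the auxiliary terms. The module names, label inventory sizes, loss types (CTC for the sequence tasks, framewise cross-entropy for pitch), and loss weights are all assumptions for illustration.

```python
# Hedged sketch of multitask training for a phonemic recognizer with accent markers.
# Everything here (architecture, vocabulary sizes, weights) is assumed, not taken
# from the paper; it only illustrates the general shape of the auxiliary-loss setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultitaskASR(nn.Module):
    def __init__(self, feat_dim=80, hidden=256,
                 n_phonemic=64,   # phonemic labels incl. accent markers (assumed size)
                 n_ortho=4000,    # orthographic text tokens (assumed size)
                 n_pitch=4):      # coarse per-frame pitch-pattern classes (assumed)
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=3,
                               batch_first=True, bidirectional=True)
        self.phonemic_head = nn.Linear(2 * hidden, n_phonemic)
        self.ortho_head = nn.Linear(2 * hidden, n_ortho)
        self.pitch_head = nn.Linear(2 * hidden, n_pitch)
        self.ctc = nn.CTCLoss(blank=0, zero_infinity=True)

    def forward(self, feats, feat_lens,
                phonemic=None, phonemic_lens=None,
                ortho=None, ortho_lens=None,
                pitch=None,
                w_ortho=0.3, w_pitch=0.1):
        enc, _ = self.encoder(feats)                      # (B, T, 2H)
        loss = feats.new_zeros(())

        # Main task: phonemic labels with accent markers (only when annotated).
        if phonemic is not None:
            log_p = self.phonemic_head(enc).log_softmax(-1).transpose(0, 1)
            loss = loss + self.ctc(log_p, phonemic, feat_lens, phonemic_lens)

        # Auxiliary task 1: orthographic token sequence, available for more data.
        if ortho is not None:
            log_o = self.ortho_head(enc).log_softmax(-1).transpose(0, 1)
            loss = loss + w_ortho * self.ctc(log_o, ortho, feat_lens, ortho_lens)

        # Auxiliary task 2: framewise pitch-pattern classification.
        if pitch is not None:
            logits = self.pitch_head(enc).reshape(-1, self.pitch_head.out_features)
            loss = loss + w_pitch * F.cross_entropy(logits, pitch.reshape(-1),
                                                    ignore_index=-100)
        return loss
```

In this sketch, a batch containing only orthographically annotated utterances simply skips the phonemic CTC term, which is the mechanism the abstract describes for leveraging such data; the second contribution, FST-based fusion of the phonetic and text-token estimators, is a decoding-time step and is not shown here.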
Similar Papers
Transcript-Prompted Whisper with Dictionary-Enhanced Decoding for Japanese Speech Annotation
Computation and Language
Makes computer voices sound more natural.
Optimizing Multilingual Text-To-Speech with Accents & Emotions
Machine Learning (CS)
Makes computers speak with Indian accents and feelings.
A Practitioner's Guide to Building ASR Models for Low-Resource Languages: A Case Study on Scottish Gaelic
Computation and Language
Teaches computers to understand rare languages better.