Transcript-Prompted Whisper with Dictionary-Enhanced Decoding for Japanese Speech Annotation
By: Rui Hu, Xiaolong Lin, Jiawang Liu, and more
Potential Business Impact:
Automates phonetic and prosodic labeling of speech data, making synthesized Japanese voices sound more natural.
In this paper, we propose a method for annotating phonemic and prosodic labels on a given audio-transcript pair, aimed at constructing Japanese text-to-speech (TTS) datasets. Our approach involves fine-tuning a large-scale pre-trained automatic speech recognition (ASR) model, conditioned on ground-truth transcripts, to simultaneously output phrase-level graphemes and annotation labels. To further correct errors in phonemic labeling, we employ a decoding strategy that utilizes dictionary prior knowledge. The objective evaluation results demonstrate that our proposed method outperforms previous approaches relying solely on text or audio. The subjective evaluation results indicate that the naturalness of speech synthesized by the TTS model, trained with labels annotated using our method, is comparable to that of a model trained with manual annotations.
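To make the idea concrete, below is a minimal sketch (not the authors' code) of the two ingredients the abstract describes: conditioning Whisper's decoding on the known ground-truth transcript via a prompt, and overriding phoneme labels with dictionary prior knowledge when a phrase is found in a lexicon. The model checkpoint, the tiny lexicon, the whitespace-based phrase split, and the label format are all assumptions for illustration; the paper's actual model is fine-tuned to emit phrase-level graphemes and annotation labels jointly.

```python
# Illustrative sketch only: transcript-prompted Whisper decoding plus a
# hypothetical dictionary-based correction of phoneme labels.
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

def annotate(audio_array, sampling_rate, transcript, phoneme_dict):
    """Decode audio while conditioning on the ground-truth transcript,
    then correct phoneme labels using dictionary prior knowledge."""
    inputs = processor(audio_array, sampling_rate=sampling_rate, return_tensors="pt")
    # Feed the known transcript as a prompt so decoding stays consistent with it.
    prompt_ids = processor.get_prompt_ids(transcript, return_tensors="pt")
    with torch.no_grad():
        generated = model.generate(inputs.input_features, prompt_ids=prompt_ids)
    hypothesis = processor.batch_decode(generated, skip_special_tokens=True)[0]

    # Dictionary-enhanced step: when a phrase is in the lexicon, trust the
    # lexicon's reading over the model output (a crude stand-in for the
    # paper's decoding strategy). Splitting on whitespace is a simplification;
    # real Japanese output would need proper phrase segmentation.
    labels = [phoneme_dict.get(phrase, phrase) for phrase in hypothesis.split()]
    return hypothesis, labels

# Hypothetical usage with a tiny lexicon (readings are illustrative only);
# audio_array would come from loading a 16 kHz waveform of the utterance.
phoneme_dict = {"東京": "to o kyo o", "音声": "o N se e"}
```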
Similar Papers
Building Tailored Speech Recognizers for Japanese Speaking Assessment
Computation and Language
Helps computers understand Japanese speech better.
A Self-Refining Framework for Enhancing ASR Using TTS-Synthesized Data
Computation and Language
Makes voice assistants understand more words.
MixedG2P-T5: G2P-free Speech Synthesis for Mixed-script texts using Speech Self-Supervised Learning and Language Model
Audio and Speech Processing
Makes computers talk like real people.