Benchmarking Expressive Japanese Character Text-to-Speech with VITS and Style-BERT-VITS2
By: Zackary Rackauckas, Julia Hirschberg
Potential Business Impact:
Makes computer voices sound like real people.
Synthesizing expressive Japanese character speech poses unique challenges due to pitch-accent sensitivity and stylistic variability. This paper benchmarks two open-source text-to-speech models, VITS and Style-BERT-VITS2 JP Extra (SBV2JE), on in-domain, character-driven Japanese speech. Using three character-specific datasets, we evaluate the models on naturalness (mean opinion score and comparative mean opinion score), intelligibility (word error rate), and speaker consistency. SBV2JE matches human ground truth in naturalness (MOS 4.37 vs. 4.38), achieves a lower WER, and shows a slight preference in CMOS. Enhanced by pitch-accent controls and a WavLM-based discriminator, SBV2JE proves effective for applications such as language learning and character dialogue generation, despite higher computational demands.
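The abstract reports word error rate (WER) as the intelligibility metric but does not describe its computation. A minimal sketch of the standard word-level metric follows, assuming whitespace-separated tokens; Japanese text has no word boundaries, so in practice transcripts would first be segmented with a tokenizer such as MeCab before scoring:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / number of reference words.

    Assumes both strings are already tokenized into space-separated words
    (a hypothetical preprocessing step for Japanese, e.g. via MeCab).
    """
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] = edit distance between the first i-1 reference words
    # and the first j hypothesis words (rolling 1-D DP row).
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            curr[j] = min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (r != h),  # substitution (0 if words match)
            )
        prev = curr
    return prev[len(hyp)] / len(ref)
```

A lower WER for SBV2JE means an ASR system transcribes its synthesized speech with fewer word-level edits against the reference text.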
Similar Papers
VStyle: A Benchmark for Voice Style Adaptation with Spoken Instructions
Sound
Computers can change their voice when you ask.
The NTNU System at the S&I Challenge 2025 SLA Open Track
Computation and Language
Tests speaking skills better by combining sound and words.