MahaTTS: A Unified Framework for Multilingual Text-to-Speech Synthesis
By: Jaskaran Singh, Amartya Roy Chowdhury, Raghav Prabhakar, and more
Potential Business Impact:
Speaks many Indian languages like a person.
Current text-to-speech models pose a multilingual challenge: most traditionally focus on English and European languages, limiting access to information for speakers of other languages. To address this gap, we introduce MahaTTS-v2, a multilingual multi-speaker text-to-speech (TTS) system with strong expressive capabilities in Indic languages. The model has been trained on around 20K hours of data specifically focused on Indian languages. Our approach leverages Wav2Vec2.0 tokens for semantic extraction and a language model (LM) for text-to-semantic modeling. Additionally, we use a Conditional Flow Model (CFM) for semantic-to-mel-spectrogram generation. The experimental results indicate the effectiveness of the proposed approach over other frameworks. Our code is available at https://github.com/dubverse-ai/MahaTTSv2
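The abstract describes a three-stage pipeline: Wav2Vec2.0-derived semantic tokens, an LM that maps text to those tokens, and a CFM that turns tokens into mel-spectrogram frames. The sketch below illustrates only the data flow between these stages; every class name, shape, and computation is a hypothetical placeholder, not the actual MahaTTSv2 API.

```python
# Illustrative sketch of a three-stage MahaTTS-v2-style pipeline.
# All names, shapes, and logic are hypothetical stand-ins, not the
# actual dubverse-ai/MahaTTSv2 implementation.

class TextToSemanticLM:
    """Stands in for the language model mapping text to semantic tokens
    (the tokens themselves would come from a Wav2Vec2.0-style codebook)."""
    def generate(self, text, speaker_id=0):
        # Dummy deterministic mapping: one token per character.
        return [(ord(c) + speaker_id) % 1024 for c in text]

class SemanticToMelCFM:
    """Stands in for the conditional flow model producing mel frames."""
    def __init__(self, n_mels=80):
        self.n_mels = n_mels
    def synthesize(self, semantic_tokens):
        # Dummy output: one mel frame (n_mels floats) per semantic token.
        return [[float(t % 10)] * self.n_mels for t in semantic_tokens]

def tts(text, lm, cfm):
    """Text -> semantic tokens -> mel-spectrogram (vocoder omitted)."""
    tokens = lm.generate(text)
    return cfm.synthesize(tokens)

mel = tts("namaste", TextToSemanticLM(), SemanticToMelCFM())
print(len(mel), len(mel[0]))  # 7 frames, 80 mel bins each
```

In the real system each stage is a trained neural network; the point of the sketch is only that synthesis factors into text-to-semantic and semantic-to-acoustic steps, so the two halves can be trained and scaled separately.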