A2TTS: TTS for Low Resource Indian Languages
By: Ayush Singh Bhadoriya, Abhishek Nikunj Shinde, Isha Pandey and more
Potential Business Impact:
Makes computers speak like any person.
We present a speaker-conditioned text-to-speech (TTS) system aimed at addressing challenges in generating speech for unseen speakers and supporting diverse Indian languages. Our method leverages a diffusion-based TTS architecture in which a speaker encoder extracts embeddings from short reference audio samples to condition the DDPM decoder for multispeaker generation. To further enhance prosody and naturalness, we employ a cross-attention-based duration prediction mechanism that utilizes reference audio, enabling more accurate and speaker-consistent timing. This results in speech that closely resembles the target speaker while improving duration modeling and overall expressiveness. Additionally, to improve zero-shot generation, we employ classifier-free guidance, allowing the system to produce speech closer to natural speech for unknown speakers. Using this approach, we trained language-specific speaker-conditioned models on the IndicSUPERB dataset for multiple Indian languages, namely Bengali, Gujarati, Hindi, Marathi, Malayalam, Punjabi, and Tamil.
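The classifier-free guidance step described in the abstract can be sketched as follows. This is a minimal illustration of the standard guided-sampling formula, not the paper's implementation; the function name, the mel-spectrogram shapes, and the guidance scale value are all illustrative assumptions.

```python
import numpy as np

def cfg_noise_estimate(eps_cond, eps_uncond, guidance_scale):
    """Classifier-free guidance: blend the conditional and unconditional
    noise estimates at each DDPM denoising step.

    guidance_scale = 1.0 recovers the pure conditional prediction;
    larger values push the sample toward the conditioning signal
    (here, the speaker embedding from the reference audio).
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy noise estimates shaped like an 80-bin, 100-frame mel spectrogram
# (shapes are hypothetical, chosen only for the example).
rng = np.random.default_rng(0)
eps_c = rng.standard_normal((80, 100))  # predicted with speaker embedding
eps_u = rng.standard_normal((80, 100))  # predicted with embedding dropped

eps = cfg_noise_estimate(eps_c, eps_u, guidance_scale=3.0)
print(eps.shape)  # (80, 100)
```

During training, this scheme requires randomly dropping the speaker condition (e.g. replacing the embedding with a null token) so the same network learns both the conditional and unconditional estimates used above.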
Similar Papers
MahaTTS: A Unified Framework for Multilingual Text-to-Speech Synthesis
Audio and Speech Processing
Speaks many Indian languages like a person.
Optimizing Multilingual Text-To-Speech with Accents & Emotions
Machine Learning (CS)
Makes computers speak with Indian accents and feelings.
BnTTS: Few-Shot Speaker Adaptation in Low-Resource Setting
Computation and Language
Makes computers speak Bengali like a real person.