Speechless: Speech Instruction Training Without Speech for Low Resource Languages
By: Alan Dao, Dinh Bach Vu, Huy Hoang Ha, and more
Potential Business Impact:
Teaches voice helpers to understand any language.
The rapid growth of voice assistants powered by large language models (LLMs) has highlighted a need for speech instruction data to train these systems. Despite the abundance of speech recognition data, there is a notable scarcity of speech instruction data, which is essential for fine-tuning models to understand and execute spoken commands. Generating high-quality synthetic speech requires a good text-to-speech (TTS) model, which may not be available for low-resource languages. Our novel approach addresses this challenge by halting synthesis at the semantic representation level, bypassing the need for TTS. We achieve this by aligning synthetic semantic representations with the pre-trained Whisper encoder, enabling an LLM to be fine-tuned on text instructions while retaining the ability to understand spoken instructions at inference. This simplified training process is a promising approach to building voice assistants for low-resource languages.
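The core idea of the abstract, aligning text-derived representations with a speech encoder's feature space so that text-only fine-tuning transfers to spoken input, can be illustrated with a toy sketch. This is not the authors' implementation: the dimensions, the linear projection, the synthetic "Whisper features", and the MSE objective are all illustrative assumptions standing in for the paper's actual alignment procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: text-side embeddings (d_text) get projected
# into a stand-in "Whisper encoder" feature space (d_speech).
d_text, d_speech, n = 16, 8, 64

# Stand-ins for precomputed features: text embeddings of instructions,
# and target semantic features a speech encoder would produce for the
# spoken counterparts (here fabricated via a hidden linear map).
text_emb = rng.normal(size=(n, d_text))
true_map = rng.normal(size=(d_text, d_speech)) / np.sqrt(d_text)
speech_feats = text_emb @ true_map  # toy target semantic features

# Learn a projection W aligning the text space with the speech-encoder
# space by gradient descent on mean-squared error.
W = np.zeros((d_text, d_speech))
lr = 0.05
for _ in range(500):
    pred = text_emb @ W
    grad = text_emb.T @ (pred - speech_feats) / n
    W -= lr * grad

mse = float(np.mean((text_emb @ W - speech_feats) ** 2))
```

Once such an alignment holds, an LLM fine-tuned on the text-side representations can, in principle, consume speech-encoder outputs at inference without any TTS-generated training audio, which is the scenario the paper targets for low-resource languages.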
Similar Papers
Unlocking Speech Instruction Data Potential with Query Rewriting
Artificial Intelligence
Makes computers understand and follow spoken instructions better.
TESU-LLM: Training Speech-LLMs Without Speech via Unified Encoder Alignment
Computation and Language
Teaches computers to understand speech without hearing it.
Empowering Global Voices: A Data-Efficient, Phoneme-Tone Adaptive Approach to High-Fidelity Speech Synthesis
Sound
Makes computers speak any language, even rare ones.