Scheduled Interleaved Speech-Text Training for Speech-to-Speech Translation with LLMs
By: Hayato Futami, Emiru Tsunoo, Yosuke Kashiwagi, and more
Potential Business Impact:
Listens to speech in one language and speaks the translation in another.
Speech-to-speech translation (S2ST) has been advanced with large language models (LLMs), which are fine-tuned on discrete speech units. In such approaches, modality adaptation from text to speech has been an issue: LLMs are trained on text-only data, which makes it challenging to adapt them to the speech modality with limited speech-to-speech data. To address this training difficulty, we propose scheduled interleaved speech-text training in this study. We use interleaved speech-text units instead of speech units during training, where aligned text tokens are interleaved at the word level. We gradually decrease the ratio of text as training progresses, to facilitate progressive modality adaptation from text to speech. We conduct experimental evaluations by fine-tuning LLaMA3.2-1B for S2ST on the CVSS dataset. We show that the proposed method consistently improves translation performance, especially for languages with limited training data.
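The abstract describes interleaving aligned text tokens with discrete speech units at the word level, with the fraction of text decaying as training progresses. A minimal sketch of what that data construction might look like, assuming word-aligned speech units and a linear decay schedule (the abstract says only that the text ratio is gradually decreased, so the linear schedule and all names below are illustrative, not the authors' implementation):

```python
import random

def interleave_units(words, speech_units_per_word, text_ratio, rng):
    """Build one interleaved speech-text training sequence.

    For each word, with probability `text_ratio` we keep the text token;
    otherwise we substitute the word's aligned discrete speech units.
    """
    sequence = []
    for word, units in zip(words, speech_units_per_word):
        if rng.random() < text_ratio:
            sequence.append(word)    # keep the text token for this word
        else:
            sequence.extend(units)   # use the aligned speech units instead
    return sequence

def text_ratio_schedule(step, total_steps, start=1.0, end=0.0):
    """Linearly decay the fraction of words rendered as text (an assumed schedule)."""
    progress = min(step / total_steps, 1.0)
    return start + (end - start) * progress

# Toy example: word-aligned speech units, e.g. obtained from a forced aligner.
words = ["hello", "world"]
units = [["<u12>", "<u87>", "<u3>"], ["<u44>", "<u5>"]]

rng = random.Random(0)
for step in (0, 500, 1000):
    ratio = text_ratio_schedule(step, total_steps=1000)
    print(step, ratio, interleave_units(words, units, ratio, rng))
```

Early in training (ratio near 1.0) the sequence is mostly text, which the text-pretrained LLM already models well; by the end (ratio near 0.0) it is pure speech units, matching the inference-time S2ST setting.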
Similar Papers
SimulS2S-LLM: Unlocking Simultaneous Inference of Speech LLMs for Speech-to-Speech Translation
Computation and Language
Translates talking instantly, like a real-time interpreter.
Adaptive Inner Speech-Text Alignment for LLM-based Speech Translation
Computation and Language
Makes computers understand spoken words and translate them.
Latent Speech-Text Transformer
Computation and Language
Helps talking computers understand speech faster.