Adapting Speech Language Model to Singing Voice Synthesis
By: Yiwen Zhao, Jiatong Shi, Jinchuan Tian, and more
Speech Language Models (SLMs) have recently emerged as a unified paradigm for addressing a wide range of speech-related tasks, including text-to-speech (TTS), speech enhancement (SE), and automatic speech recognition (ASR). However, the generalization capability of large-scale pre-trained SLMs remains underexplored. In this work, we adapt a 1.7B-parameter TTS-pretrained SLM to singing voice synthesis (SVS), using only a 135-hour synthetic singing corpus, ACE-Opencpop. Building upon ESPnet-SpeechLM, our recipe comprises four stages: (1) tokenization of music-score conditions and singing waveforms, (2) multi-stream language-model token prediction, (3) conditional flow-matching-based mel-spectrogram generation, and (4) a mel-to-wave vocoder. Experimental results demonstrate that the adapted SLM generalizes well to SVS and achieves performance comparable to leading discrete-token-based SVS models.
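To make the four-stage recipe concrete, the sketch below illustrates stages (2) and (3) in minimal PyTorch: a multi-stream token LM conditioned on tokenized score inputs, followed by a conditional flow-matching decoder that integrates a learned velocity field from noise to a mel-spectrogram. All module names, vocabulary sizes, and layer dimensions here are illustrative assumptions, not the paper's actual ESPnet-SpeechLM implementation (which uses a 1.7B-parameter model and trained tokenizers); stage (4), the mel-to-wave vocoder, is noted in a comment.

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary sizes and dimensions; the real recipe uses
# ESPnet-SpeechLM tokenizers and a far larger backbone.
N_PHONE, N_PITCH, N_ACOUSTIC = 64, 128, 1024
N_STREAMS, D_MODEL, N_MEL = 4, 256, 80

class MultiStreamLM(nn.Module):
    """Toy stand-in for stage (2): predict one acoustic token per
    parallel codec stream, conditioned on music-score tokens."""
    def __init__(self):
        super().__init__()
        self.phone_emb = nn.Embedding(N_PHONE, D_MODEL)
        self.pitch_emb = nn.Embedding(N_PITCH, D_MODEL)
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True),
            num_layers=2,
        )
        # one classification head per token stream
        self.heads = nn.ModuleList(
            nn.Linear(D_MODEL, N_ACOUSTIC) for _ in range(N_STREAMS)
        )

    def forward(self, phones, pitches):
        h = self.backbone(self.phone_emb(phones) + self.pitch_emb(pitches))
        return [head(h) for head in self.heads]  # per-stream logits

class FlowMatchingMelDecoder(nn.Module):
    """Toy stand-in for stage (3): a conditional velocity field
    v(x_t, t, cond), integrated with Euler steps from noise to mel."""
    def __init__(self):
        super().__init__()
        self.cond_emb = nn.Embedding(N_ACOUSTIC, D_MODEL)
        self.net = nn.Sequential(
            nn.Linear(N_MEL + D_MODEL + 1, D_MODEL), nn.SiLU(),
            nn.Linear(D_MODEL, N_MEL),
        )

    def velocity(self, x_t, t, cond):
        t_feat = t.expand(*x_t.shape[:-1], 1)  # broadcast time to each frame
        return self.net(torch.cat([x_t, cond, t_feat], dim=-1))

    @torch.no_grad()
    def sample(self, tokens, steps=8):
        cond = self.cond_emb(tokens)
        x = torch.randn(*tokens.shape, N_MEL)      # start from Gaussian noise
        for i in range(steps):                     # Euler ODE integration
            t = torch.full((1,), i / steps)
            x = x + self.velocity(x, t, cond) / steps
        return x                                   # (B, T, N_MEL) mel frames

lm, decoder = MultiStreamLM(), FlowMatchingMelDecoder()
phones = torch.randint(0, N_PHONE, (1, 50))    # stage (1): tokenized score
pitches = torch.randint(0, N_PITCH, (1, 50))
logits = lm(phones, pitches)                   # stage (2): multi-stream LM
tokens = logits[0].argmax(-1)                  # greedy pick on first stream
mel = decoder.sample(tokens)                   # stage (3): flow matching
# stage (4) would pass `mel` through a mel-to-wave vocoder such as HiFi-GAN.
```

For readability the sketch decodes greedily and conditions the flow-matching decoder on a single stream; the actual system predicts and consumes all streams jointly and samples with a trained codec pipeline.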
Similar Papers
YingMusic-Singer: Zero-shot Singing Voice Synthesis and Editing with Annotation-free Melody Guidance
Sound
Makes computers sing any song with any words.
VSpeechLM: A Visual Speech Language Model for Visual Text-to-Speech Task
Multimedia
Makes videos talk with matching lip movements.
SingingSDS: A Singing-Capable Spoken Dialogue System for Conversational Roleplay Applications
Sound
Makes computer characters sing their answers.