JoyTTS: LLM-based Spoken Chatbot With Voice Cloning
By: Fangru Zhou, Jun Zhao, Guoxin Wang
Potential Business Impact:
Makes computers talk like real people.
JoyTTS is an end-to-end spoken chatbot that combines a large language model (LLM) with text-to-speech (TTS) and supports voice cloning. The system is built on the open-source MiniCPM-o and CosyVoice2 models and trained on 2,000 hours of conversational data. We also provide the complete training code to facilitate further development and optimization by the community. On the seed-tts-zh benchmark, JoyTTS achieves a speaker similarity (SS) score of 0.73 and a word error rate (WER) of 5.09. The code and models, along with training and inference scripts, are available at https://github.com/jdh-algo/JoyTTS.git.
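To make the pipeline the abstract describes more concrete, here is a minimal sketch of one chatbot turn: an LLM produces the text reply, and a voice-cloning TTS module speaks it in the timbre of a reference audio clip. The class names `ChatLLM`, `VoiceCloneTTS`, and their methods are hypothetical placeholders, not the actual JoyTTS API; the real training and inference scripts live in the linked repository.

```python
# Sketch of an LLM -> voice-cloning TTS turn (hypothetical wrappers, not the JoyTTS API).
import soundfile as sf  # assumed dependency for writing the output waveform


class ChatLLM:
    """Hypothetical wrapper around the MiniCPM-o-based dialogue model."""

    def reply(self, user_text: str) -> str:
        # The dialogue LLM generates the chatbot's text response.
        raise NotImplementedError


class VoiceCloneTTS:
    """Hypothetical wrapper around the CosyVoice2-based TTS module."""

    def synthesize(self, text: str, reference_wav: str):
        # Returns (waveform, sample_rate); the reference clip supplies the
        # target speaker's timbre for voice cloning.
        raise NotImplementedError


def spoken_chat_turn(llm: ChatLLM, tts: VoiceCloneTTS,
                     user_text: str, reference_wav: str,
                     out_path: str = "reply.wav") -> str:
    """One turn: generate a text reply, then speak it in the cloned voice."""
    reply_text = llm.reply(user_text)
    waveform, sample_rate = tts.synthesize(reply_text, reference_wav)
    sf.write(out_path, waveform, sample_rate)
    return reply_text
```

In an end-to-end deployment the two stages would be fused and streamed for low latency; this sketch only illustrates the logical flow from user text to cloned-voice audio.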
Similar Papers
EmoVoice: LLM-based Emotional Text-To-Speech Model with Freestyle Text Prompting
Audio and Speech Processing
Makes talking robots sound happy or sad.
Spark-TTS: An Efficient LLM-Based Text-to-Speech Model with Single-Stream Decoupled Speech Tokens
Sound
Makes computers talk with any voice, any style.
LLMVoX: Autoregressive Streaming Text-to-Speech Model for Any LLM
Computation and Language
Lets computers talk and understand like humans.