VocalNet-M2: Advancing Low-Latency Spoken Language Modeling via Integrated Multi-Codebook Tokenization and Multi-Token Prediction
By: Yuhao Wang, Ziyang Cheng, Heyang Liu, and more
Potential Business Impact:
Makes talking computers respond faster.
Current end-to-end spoken language models (SLMs) have made notable progress, yet they still encounter considerable response latency. This delay primarily arises from the autoregressive generation of speech tokens and the reliance on complex flow-matching models for speech synthesis. To overcome this, we introduce VocalNet-M2, a novel low-latency SLM that integrates a multi-codebook tokenizer and a multi-token prediction (MTP) strategy. Our model directly generates multi-codebook speech tokens, thus eliminating the need for a latency-inducing flow-matching model. Furthermore, our MTP strategy enhances generation efficiency and improves overall performance. Extensive experiments demonstrate that VocalNet-M2 achieves a substantial reduction in first chunk latency (from approximately 725ms to 350ms) while maintaining competitive performance across mainstream SLMs. This work also provides a comprehensive comparison of single-codebook and multi-codebook strategies, offering valuable insights for developing efficient and high-performance SLMs for real-time interactive applications.
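The latency gain from multi-token prediction comes from cutting the number of sequential autoregressive steps: emitting k speech tokens per forward pass divides the count of model calls needed before the first audio chunk is ready. The toy sketch below is not the authors' implementation; the chunk size, per-step cost, and function names are illustrative assumptions used only to show the arithmetic.

```python
# Toy illustration (not VocalNet-M2's code) of why multi-token
# prediction (MTP) reduces first-chunk latency: each autoregressive
# forward pass is sequential, so predicting k tokens per pass divides
# the number of passes needed before the first chunk can be synthesized.

import math

def decoding_steps(num_tokens: int, tokens_per_step: int) -> int:
    """Sequential forward passes needed to emit num_tokens."""
    return math.ceil(num_tokens / tokens_per_step)

def first_chunk_latency_ms(chunk_tokens: int, tokens_per_step: int,
                           step_ms: float) -> float:
    """Latency until the first chunk's tokens exist, assuming each
    forward pass costs step_ms (a hypothetical constant)."""
    return decoding_steps(chunk_tokens, tokens_per_step) * step_ms

# Hypothetical numbers: a 40-token first chunk, 15 ms per forward pass.
single = first_chunk_latency_ms(40, 1, 15.0)  # 40 steps -> 600.0 ms
mtp = first_chunk_latency_ms(40, 5, 15.0)     # 8 steps  -> 120.0 ms
print(single, mtp)
```

Under these made-up constants, 5-token MTP cuts the sequential-step count (and thus the modeled first-chunk delay) by 5x; the paper's reported reduction (roughly 725 ms to 350 ms) also reflects removing the flow-matching synthesis stage, which this sketch does not model.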
Similar Papers
VocalNet: Speech LLM with Multi-Token Prediction for Faster and High-Quality Generation
Computation and Language
Makes computers talk and understand faster.
MTP-S2UT: Enhancing Speech-to-Speech Translation Quality with Multi-token Prediction
Computation and Language
Translates spoken words better by understanding more meaning.
Comprehend and Talk: Text to Speech Synthesis via Dual Language Modeling
Sound
Makes computer voices sound more natural and human.