Speech Discrete Tokens or Continuous Features? A Comparative Analysis for Spoken Language Understanding in SpeechLLMs
By: Dingdong Wang, Junan Li, Mingyu Cui, and more
Potential Business Impact:
Helps computers understand spoken language better.
With the rise of Speech Large Language Models (SpeechLLMs), two dominant approaches have emerged for speech processing: discrete tokens and continuous features. Each approach has demonstrated strong capabilities in audio-related processing tasks. However, the performance gap between these two paradigms has not been thoroughly explored. To address this gap, we present a fair comparison of self-supervised learning (SSL)-based discrete and continuous features under the same experimental settings. We evaluate their performance across six spoken language understanding tasks using both small- and large-scale LLMs (Qwen1.5-0.5B and Llama3.1-8B). We further conduct in-depth analyses, including an efficiency comparison, SSL layer analysis, LLM layer analysis, and robustness comparison. Our findings reveal that continuous features generally outperform discrete tokens across tasks, and that each representation exhibits distinct characteristics and patterns in how it learns and processes speech information. We hope our results will provide valuable insights to advance spoken language understanding in SpeechLLMs.
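To make the two paradigms concrete, the Python sketch below contrasts them. It is a minimal illustration, not the paper's pipeline: it assumes a HuBERT-base SSL encoder from HuggingFace transformers, an arbitrarily chosen hidden layer, and a per-utterance k-means quantizer (real systems fit the quantizer offline on a large corpus); the checkpoint name, layer index, and cluster count are all illustrative assumptions.

import torch
import numpy as np
from sklearn.cluster import KMeans
from transformers import HubertModel

# Illustrative SSL encoder; the paper's exact model and checkpoint may differ.
ssl_model = HubertModel.from_pretrained("facebook/hubert-base-ls960")
ssl_model.eval()

# Placeholder waveform: 4 s of 16 kHz audio (random noise stands in for speech).
waveform = torch.from_numpy(np.random.randn(1, 64000).astype(np.float32))

with torch.no_grad():
    # Continuous path: frame-level hidden states from one SSL layer.
    # In a SpeechLLM these are projected into the LLM embedding space.
    hidden_states = ssl_model(waveform, output_hidden_states=True).hidden_states
    continuous_feats = hidden_states[9].squeeze(0)  # (frames, dim); layer 9 is arbitrary

# Discrete path: quantize the same frames with k-means; each cluster id becomes
# a token the LLM reads like ordinary text. Real systems train the quantizer
# offline on a large corpus; fitting per utterance here keeps the sketch short.
frames = continuous_feats.numpy()
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(frames)
discrete_tokens = kmeans.predict(frames)  # (frames,) integer token ids

print("continuous:", continuous_feats.shape, "| discrete:", discrete_tokens[:10])

The continuous path hands the LLM dense vectors that preserve fine acoustic detail, while the discrete path compresses each frame to a single cluster id, which is cheaper to store and stream but lossy; this trade-off is exactly what the paper's six-task comparison probes.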
Similar Papers
Continuous-Token Diffusion for Speaker-Referenced TTS in Multimodal LLMs
Audio and Speech Processing
Makes computers talk like real people.
Recent Advances in Discrete Speech Tokens: A Review
Audio and Speech Processing
Makes computers understand and talk like humans.