A Unified Speech LLM for Diarization and Speech Recognition in Multilingual Conversations
By: Phurich Saengthong, Boonnithi Jiaramaneepinit, Sheng Li, et al.
Potential Business Impact:
Helps computers figure out who said what in conversations that mix many languages.
Speech Large Language Models (Speech LLMs) have emerged as a crucial paradigm in recent years, extending the capabilities of traditional LLMs to speech tasks such as automatic speech recognition (ASR) and spoken dialogue modeling. However, their effectiveness in real-world multilingual conversations remains limited by the scarcity of data that captures natural conversational phenomena. To address this, the MLC-SLM Challenge provides a multilingual conversational dataset and evaluates models on two tasks: ASR with oracle segmentation (Task I) and joint diarization and recognition without oracle information (Task II). In this paper, we focus on Task II and propose a unified speech LLM that jointly performs diarization and ASR in an end-to-end manner. By reformulating the training data format and modifying the inference procedure, our model addresses the ambiguity inherent in pre-segmented audio and achieves a 54.87% relative improvement in tcpWER/tcpCER over the baseline, ranking 8th overall, despite using a smaller LLM backbone. We also report results from Task I using a fine-tuned speech LLM.
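To make the joint diarization-plus-ASR idea concrete, below is a minimal Python sketch of one way a single decoded hypothesis might interleave speaker tags with transcribed words, and how such a string could be parsed back into per-speaker segments for tcpWER/tcpCER-style scoring. The `<spkN>` tag scheme, the `Segment` type, and `parse_joint_output` are illustrative assumptions, not the paper's actual data format.

```python
import re
from dataclasses import dataclass


@dataclass
class Segment:
    speaker: str
    text: str


# Hypothetical serialization: speaker-change tags interleaved with words,
# e.g. "<spk0> how are you <spk1> fine thanks". The tag names and layout
# are assumptions for illustration only.
TAG = re.compile(r"<(spk\d+)>")


def parse_joint_output(decoded: str) -> list[Segment]:
    """Split a speaker-tagged hypothesis into per-speaker segments."""
    parts = TAG.split(decoded)
    # re.split with a capturing group yields
    # [leading_text, speaker, text, speaker, text, ...];
    # drop the leading chunk when the string starts with a tag.
    if parts and parts[0].strip() == "":
        parts = parts[1:]
    it = iter(parts)
    segments: list[Segment] = []
    for speaker, text in zip(it, it):
        text = text.strip()
        if text:
            segments.append(Segment(speaker=speaker, text=text))
    return segments


if __name__ == "__main__":
    hyp = "<spk0> hello how are you <spk1> fine thanks <spk0> great"
    for seg in parse_joint_output(hyp):
        print(seg.speaker, "->", seg.text)
```

Under this kind of format, a single autoregressive decoding pass produces both the word sequence and the speaker attribution, which is what allows diarization and recognition to be trained and run end-to-end rather than as separate pipeline stages.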
Similar Papers
SpeechLLM: Unified Speech and Language Model for Enhanced Multi-Task Understanding in Low Resource Settings
Computation and Language
Lets one model handle many speech and language tasks even with little training data.
The Eloquence team submission for task 1 of MLC-SLM challenge
Sound
Helps computers transcribe conversations spoken in many languages.
Bi-directional Context-Enhanced Speech Large Language Models for Multilingual Conversational ASR
Computation and Language
Uses surrounding context to help computers recognize multilingual conversational speech better.