Joint ASR and Speaker Role Tagging with Serialized Output Training

Published: June 12, 2025 | arXiv ID: 2506.10349v1

By: Anfeng Xu, Tiantian Feng, Shrikanth Narayanan

Potential Business Impact:

Lets computers identify who is speaking, and in what role, during a conversation.

Business Areas:
Speech Recognition Data and Analytics, Software

Automatic Speech Recognition (ASR) systems have made significant progress with large-scale pre-trained models. However, most current systems focus solely on transcribing speech without identifying speaker roles, a function that is critical for conversational AI. In this work, we investigate the use of serialized output training (SOT) for joint ASR and speaker role tagging. By augmenting Whisper with role-specific tokens and fine-tuning it with SOT, we enable the model to generate role-aware transcriptions in a single decoding pass. We compare the SOT approach against a previous self-supervised baseline on two real-world conversational datasets. Our findings show that this approach achieves a more than 10% reduction in multi-talker WER, demonstrating its feasibility as a unified model for speaker-role-aware speech transcription.
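To make the SOT idea concrete, the sketch below builds a serialized, role-tagged target transcript of the kind a fine-tuned Whisper model would be trained to emit in one decoding pass. The role names (`role_a`, `role_b`), the `<|...|>` token format, and the time-ordered serialization are illustrative assumptions; the paper's exact tokens and ordering scheme may differ.

```python
# Hedged sketch: constructing a serialized output training (SOT) target
# with speaker-role tokens. Role names and token syntax are hypothetical,
# not taken from the paper.

def build_sot_target(utterances, roles=("role_a", "role_b")):
    """Serialize role-tagged utterances into a single target string.

    utterances: list of (start_time, role, text) tuples.
    Returns one transcript in which each segment is prefixed by a
    role-specific token such as <|role_a|>, so role tags and words
    can be decoded together in a single pass.
    """
    for _, role, _ in utterances:
        if role not in roles:
            raise ValueError(f"unknown role: {role}")
    # Serialize segments in order of their start times.
    ordered = sorted(utterances, key=lambda u: u[0])
    return "".join(f"<|{role}|>{text}" for _, role, text in ordered)

example = [
    (2.4, "role_b", " what do you see"),
    (0.0, "role_a", " a red ball"),
]
print(build_sot_target(example))
# → <|role_a|> a red ball<|role_b|> what do you see
```

In training, strings like this replace the plain transcript as the decoder target, so no separate diarization pass is needed at inference time.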

Page Count
6 pages

Category
Electrical Engineering and Systems Science:
Audio and Speech Processing