Score: 1

Better Pseudo-labeling with Multi-ASR Fusion and Error Correction by SpeechLLM

Published: June 5, 2025 | arXiv ID: 2506.11089v1

By: Jeena Prakash, Blessingh Kumar, Kadri Hacioglu, and more

Potential Business Impact:

Makes computers transcribe spoken words more accurately by producing better training labels from unlabeled audio.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Automatic speech recognition (ASR) models rely on high-quality transcribed data for effective training. Generating pseudo-labels for large unlabeled audio datasets often relies on complex pipelines that combine multiple ASR outputs through multi-stage processing, leading to error propagation, information loss, and disjoint optimization. We propose a unified, prompt-driven multi-ASR framework that post-processes the ensemble outputs with either textual or speech-based large language models (LLMs), replacing voting and other arbitration logic for reconciling the hypotheses. We perform a comparative study of multiple architectures with and without LLMs, showing significant improvements in transcription accuracy compared to traditional methods. Furthermore, we use the pseudo-labels generated by the various approaches to train semi-supervised ASR models on different datasets, again showing improved performance with textual and SpeechLLM transcriptions compared to baselines.
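
The fusion step described in the abstract lends itself to a short illustration. The Python sketch below is not the authors' implementation: the function names (`build_fusion_prompt`, `fuse_with_llm`), the prompt wording, and the `llm_generate` callable are illustrative assumptions, with the callable standing in for whatever textual or speech LLM is actually used. It shows the general idea of packing hypotheses from several ASR systems into one prompt and letting an LLM reconcile them, rather than applying voting or other arbitration logic.

```python
# Minimal sketch (not the paper's code): prompt-driven fusion of multiple ASR
# hypotheses via an LLM, replacing voting-style arbitration for pseudo-labeling.

from typing import Callable, Dict


def build_fusion_prompt(hypotheses: Dict[str, str]) -> str:
    """Format hypotheses from several ASR systems into a single prompt that
    asks the LLM for one corrected transcript (the pseudo-label)."""
    lines = [
        "You are given transcripts of the same audio from several ASR systems.",
        "They may contain errors. Produce the single most likely correct transcript.",
        "",
    ]
    for name, text in hypotheses.items():
        lines.append(f"{name}: {text}")
    lines.append("")
    lines.append("Corrected transcript:")
    return "\n".join(lines)


def fuse_with_llm(
    hypotheses: Dict[str, str],
    llm_generate: Callable[[str], str],
) -> str:
    """Return the LLM's reconciled transcript for one utterance."""
    prompt = build_fusion_prompt(hypotheses)
    return llm_generate(prompt).strip()


if __name__ == "__main__":
    # Stand-in for a real textual LLM or SpeechLLM call; it just returns the
    # longest hypothesis so the example runs without any model available.
    def dummy_llm(prompt: str) -> str:
        candidates = [l.split(": ", 1)[1] for l in prompt.splitlines() if ": " in l]
        return max(candidates, key=len)

    hyps = {
        "asr_a": "the quick brown fox jump over the lazy dog",
        "asr_b": "the quick brown fox jumps over a lazy dog",
        "asr_c": "quick brown fox jumps over the lazy dog",
    }
    print(fuse_with_llm(hyps, dummy_llm))
```

In the semi-supervised setup the paper describes, the fused output for each unlabeled utterance would then serve as its pseudo-label when training the downstream ASR model.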

Repos / Data Links

Page Count
5 pages

Category
Electrical Engineering and Systems Science: Audio and Speech Processing