Efficient Scaling for LLM-based ASR
By: Bingshen Mu, Yiwen Shao, Kun Wei, and more
Potential Business Impact:
Boosts speech-to-text accuracy with about half the compute
Large language model (LLM)-based automatic speech recognition (ASR) achieves strong performance but often incurs high computational costs. This work investigates how to obtain the best LLM-ASR performance efficiently. Through comprehensive and controlled experiments, we find that pretraining the speech encoder before integrating it with the LLM leads to significantly better scaling efficiency than the standard practice of joint post-training of LLM-ASR. Based on this insight, we propose a new multi-stage LLM-ASR training strategy, EFIN (Encoder First Integration). Among all training strategies evaluated, EFIN consistently delivers better performance (a 21.1% relative character error rate reduction, CERR) at a significantly lower computation budget (49.9% of the FLOPs). Furthermore, we derive a scaling law that approximates ASR error rates as a function of computation, providing practical guidance for LLM-ASR scaling.
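The scaling-law claim can be made concrete with a small curve-fitting sketch. The snippet below is illustrative only: the saturating power-law form, the synthetic data points, the reference constant C0, and the function name error_vs_compute are assumptions for demonstration, not the law or measurements reported in the paper.

```python
# Minimal sketch of fitting an ASR scaling law of the assumed form
# err(C) = a * (C / C0)^(-b) + c, where C is training compute in FLOPs.
# Functional form, data points, and names are illustrative assumptions,
# not the paper's reported law.
import numpy as np
from scipy.optimize import curve_fit

C0 = 1e18  # reference compute scale, keeps the fit well-conditioned

def error_vs_compute(flops, a, b, c):
    """Hypothetical scaling law: error rate as a power law of compute."""
    return a * np.power(flops / C0, -b) + c

# Synthetic (compute, CER) pairs standing in for measured LLM-ASR runs.
flops = np.array([1e18, 3e18, 1e19, 3e19, 1e20])
cer = np.array([0.120, 0.100, 0.085, 0.075, 0.070])

(a, b, c), _ = curve_fit(error_vs_compute, flops, cer, p0=[0.07, 0.3, 0.05])
print(f"fitted: err(C) ~= {a:.3g} * (C/1e18)^(-{b:.3g}) + {c:.3g}")

# Extrapolating the fitted curve estimates the error rate reachable at a
# larger compute budget, which is the practical use of such a scaling law.
print("predicted CER at 1e21 FLOPs:", error_vs_compute(1e21, a, b, c))
```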
Similar Papers
FunAudio-ASR Technical Report
Computation and Language
Makes talking computers understand messy, noisy speech.
Speech LLMs in Low-Resource Scenarios: Data Volume Requirements and the Impact of Pretraining on High-Resource Languages
Audio and Speech Processing
Helps computers understand rare, low-resource languages.