Efficient Scaling for LLM-based ASR

Published: August 6, 2025 | arXiv ID: 2508.04096v1

By: Bingshen Mu, Yiwen Shao, Kun Wei, and more

Potential Business Impact:

Improves speech-to-text accuracy at roughly half the compute cost

Large language model (LLM)-based automatic speech recognition (ASR) achieves strong performance but often incurs high computational costs. This work investigates how to obtain the best LLM-ASR performance efficiently. Through comprehensive and controlled experiments, we find that pretraining the speech encoder before integrating it with the LLM scales significantly more efficiently than the standard practice of jointly post-training the full LLM-ASR system. Based on this insight, we propose a new multi-stage LLM-ASR training strategy, EFIN (Encoder First Integration). Among all training strategies evaluated, EFIN consistently delivers better performance (a 21.1% relative character error rate reduction) with a significantly lower computation budget (49.9% of the FLOPs). Furthermore, we derive a scaling law that approximates ASR error rates as a function of computation, providing practical guidance for scaling LLM-ASR systems.
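
The summary does not give the functional form of the scaling law, so the sketch below assumes a common choice for compute-to-error curves: a power-law decay of error rate with training compute plus an irreducible floor, err(C) = a * C^(-b) + c. The data points, coefficients, and names here are placeholders for illustration, not results from the paper.

# Minimal sketch (assumed form, placeholder data): fit err(C) = a * C^(-b) + c,
# where C is training compute in FLOPs and c is an irreducible error floor.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(flops, a, b, c):
    # Normalize compute to 1e18 FLOPs so the optimizer works on well-scaled numbers.
    return a * (flops / 1e18) ** (-b) + c

# Placeholder (compute, character error rate) points from a hypothetical training sweep.
compute = np.array([1e18, 3e18, 1e19, 3e19, 1e20])
cer = np.array([12.4, 10.1, 8.6, 7.8, 7.3])

(a, b, c), _ = curve_fit(scaling_law, compute, cer, p0=[10.0, 0.3, 5.0])
print(f"fitted: err(C) ~ {a:.2f} * (C/1e18)^(-{b:.2f}) + {c:.2f}")

# Once fitted, the curve can be inverted to estimate the compute budget needed
# to reach a target error rate before committing to a full training run.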

Page Count
7 pages

Category
Computer Science:
Sound