SUTA-LM: Bridging Test-Time Adaptation and Language Model Rescoring for Robust ASR

Published: June 10, 2025 | arXiv ID: 2506.11121v1

By: Wei-Ping Huang, Guan-Ting Lin, Hung-yi Lee

Potential Business Impact:

Improves how reliably voice assistants transcribe noisy or out-of-domain speech, adapting at inference time without retraining the underlying model.

Business Areas:
Semantic Search, Internet Services

Despite progress in end-to-end ASR, real-world domain mismatches still cause performance drops, which Test-Time Adaptation (TTA) aims to mitigate by adjusting models during inference. Recent work explores combining TTA with external language models, using techniques like beam search rescoring or generative error correction. In this work, we identify a previously overlooked challenge: TTA can interfere with language model rescoring, revealing the nontrivial nature of effectively combining the two methods. Based on this insight, we propose SUTA-LM, a simple yet effective extension of SUTA, an entropy-minimization-based TTA approach, with language model rescoring. SUTA-LM first applies a controlled adaptation process guided by an auto-step selection mechanism leveraging both acoustic and linguistic information, followed by language model rescoring to refine the outputs. Experiments on 18 diverse ASR datasets show that SUTA-LM achieves robust results across a wide range of domains.

Country of Origin
🇹🇼 Taiwan


Page Count
7 pages

Category
Computer Science:
Computation and Language