Score: 2

ZO-ASR: Zeroth-Order Fine-Tuning of Speech Foundation Models without Back-Propagation

Published: December 1, 2025 | arXiv ID: 2512.01267v1

By: Yuezhang Peng, Yuxin Liu, Yao Li, and more

Potential Business Impact:

Lets teams adapt speech recognition models with far less GPU memory, since no back-propagation is required.

Business Areas:
Speech Recognition Data and Analytics, Software

Fine-tuning pre-trained speech foundation models for Automatic Speech Recognition (ASR) is prevalent, yet constrained by substantial GPU memory requirements. We introduce ZO-ASR, a memory-efficient Zeroth-Order (ZO) method that avoids Back-Propagation (BP) and activation memory by estimating gradients via forward passes. When combined with the SGD optimizer, ZO-ASR-SGD fine-tunes ASR models using only inference memory. Our evaluation spans supervised and unsupervised tasks. For Supervised Domain Adaptation on Whisper-Large-V3, ZO-ASR's multiple-query mechanism enhances robustness and achieves up to an 18.9% relative Word Error Rate reduction over zero-shot baselines, outperforming existing ZO methods. For unsupervised Test-Time Adaptation on Wav2Vec2-Base, ZO-ASR exhibits moderately lower performance than the first-order Adam optimizer. Our BP-free approach provides a viable solution for fine-tuning ASR models in computationally resource-constrained or gradient-inaccessible scenarios.
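To make the abstract's mechanism concrete, here is a minimal sketch (assuming PyTorch) of a two-point, SPSA-style zeroth-order gradient estimate with multiple queries, the general idea behind BP-free fine-tuning described above. This is not the authors' exact ZO-ASR implementation; `model`, `loss_fn`, `epsilon`, `lr`, and `num_queries` are illustrative assumptions.

```python
# Sketch of a zeroth-order SGD step: gradients are estimated from forward
# passes only, so no back-propagation graph or activation memory is needed.
import torch


def zo_sgd_step(model, loss_fn, batch, epsilon=1e-3, lr=1e-6, num_queries=4):
    """One BP-free update step using a two-point gradient estimate."""
    params = [p for p in model.parameters() if p.requires_grad]
    device = params[0].device

    def perturb(seed, scale):
        # Regenerate the same Gaussian direction from its seed instead of
        # storing it, so memory stays at the inference level.
        gen = torch.Generator(device=device).manual_seed(seed)
        for p in params:
            z = torch.randn(p.shape, generator=gen, device=device, dtype=p.dtype)
            p.data.add_(scale * z)

    seeds, proj_grads = [], []
    with torch.no_grad():
        for _ in range(num_queries):               # multiple queries improve robustness
            seed = int(torch.randint(0, 2**31 - 1, (1,)))
            perturb(seed, +epsilon)
            loss_plus = loss_fn(model, batch)      # forward pass at theta + eps*z
            perturb(seed, -2 * epsilon)
            loss_minus = loss_fn(model, batch)     # forward pass at theta - eps*z
            perturb(seed, +epsilon)                # restore original weights
            seeds.append(seed)
            proj_grads.append((loss_plus - loss_minus) / (2 * epsilon))

        # SGD update along each sampled direction, averaged over the queries.
        for seed, g in zip(seeds, proj_grads):
            perturb(seed, -lr * float(g) / num_queries)
```

Because only a random seed and a scalar projected gradient are kept per query, the peak memory footprint matches plain inference, which is the property the abstract highlights for ZO-ASR-SGD.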

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
7 pages

Category
Computer Science: Multimedia