Self-Improvement for Audio Large Language Model using Unlabeled Speech

Published: July 27, 2025 | arXiv ID: 2507.20169v1

By: Shaowen Wang, Xinyuan Chen, Yao Xu

Potential Business Impact:

Improves voice AI in new domains without needing labeled recordings.

Recent audio large language models (LLMs) have emerged rapidly, demonstrating strong generalization across a range of speech tasks. However, given the inherent complexity of speech signals, these models inevitably suffer performance degradation in specific target domains. To address this, we focus on enhancing audio LLMs in target domains without any labeled data. We propose a self-improvement method called SI-SDA, which leverages the information embedded in large-model decoding to evaluate the quality of generated pseudo-labels and then performs domain adaptation via reinforcement-learning-based optimization. Experimental results show that our method consistently and significantly improves audio LLM performance, outperforming existing baselines in word error rate (WER) and BLEU across multiple public datasets for automatic speech recognition (ASR), spoken question answering (SQA), and speech-to-text translation (S2TT). Furthermore, our approach exhibits high data efficiency, underscoring its potential for real-world deployment.
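As a rough illustration of the pseudo-label filtering idea described in the abstract, the sketch below scores each decoded hypothesis by a length-normalized confidence derived from its token log-probabilities and keeps only high-confidence outputs as pseudo-labels. The `Hypothesis` container, the 0.8 threshold, and the exp-of-mean-log-prob score are illustrative assumptions, not the paper's exact formulation, and the reinforcement-learning update that SI-SDA then performs on the retained labels is omitted.

```python
import math
from dataclasses import dataclass

# Hypothetical container for one decoded hypothesis. SI-SDA uses
# information from large-model decoding to judge pseudo-label quality;
# per-token log-probabilities are one commonly available signal.
@dataclass
class Hypothesis:
    text: str
    token_logprobs: list  # per-token log-probabilities from the decoder

def confidence(hyp: Hypothesis) -> float:
    """Length-normalized sequence confidence: exp(mean token log-prob)."""
    if not hyp.token_logprobs:
        return 0.0
    return math.exp(sum(hyp.token_logprobs) / len(hyp.token_logprobs))

def select_pseudo_labels(hyps, threshold=0.8):
    """Keep only hypotheses the model itself decoded with high confidence.

    The retained (text, confidence) pairs would then drive the
    RL-based domain adaptation step, which is not sketched here.
    """
    return [(h.text, confidence(h)) for h in hyps if confidence(h) >= threshold]

# Example: two decoded transcripts for unlabeled target-domain audio.
hyps = [
    Hypothesis("turn the volume down", [-0.05, -0.10, -0.02, -0.08]),
    Hypothesis("turn the valium down", [-0.9, -1.2, -2.3, -0.7]),
]
for text, conf in select_pseudo_labels(hyps):
    print(f"pseudo-label: {text!r} (confidence {conf:.2f})")
```

Run as-is, only the first hypothesis survives the filter (confidence ≈ 0.94 vs. ≈ 0.28), which is the intended behavior: low-confidence decodings are dropped rather than used as noisy training targets.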

Page Count
6 pages

Category
Computer Science:
Sound