LauraTSE: Target Speaker Extraction using Auto-Regressive Decoder-Only Language Models

Published: April 10, 2025 | arXiv ID: 2504.07402v3

By: Beilong Tang, Bang Zeng, Ming Li

Potential Business Impact:

Extracts a target speaker's voice from noisy, multi-speaker recordings.

Business Areas:
Speech Recognition Data and Analytics, Software

We propose LauraTSE, an Auto-Regressive Decoder-Only Language Model for Target Speaker Extraction built upon the LauraGPT backbone. LauraTSE employs a small-scale auto-regressive decoder-only language model that generates the initial layers of the target speech's discrete codec representations from the continuous embeddings of both the mixture and the reference speech. These outputs serve as coarse-grained predictions. To refine them, a one-step encoder-only language model reconstructs the full codec representation by integrating information from both the mixture and the reference speech, adding fine-grained details. Experimental results show that our approach achieves promising performance. Additionally, we conduct ablation studies to investigate data scalability and the contribution of the encoder-only model.
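The abstract describes a coarse-then-refine pipeline: an auto-regressive decoder-only model predicts the first few codec layers from mixture and reference embeddings, and a one-step encoder-only model fills in the remaining layers. The sketch below illustrates only the data flow and tensor shapes of such a two-stage design; the layer counts, codebook size, embedding dimensions, and the placeholder random predictions are all assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

N_LAYERS = 8        # total codec quantizer layers (assumed)
COARSE_LAYERS = 3   # layers predicted auto-regressively (assumed)
VOCAB = 1024        # codec codebook size (assumed)
T = 50              # number of target-speech frames (assumed)

rng = np.random.default_rng(0)

def ar_decoder_coarse(mixture_emb, reference_emb, n_frames):
    """Stand-in for the auto-regressive decoder-only LM: emits the first
    COARSE_LAYERS codec layers frame by frame, conditioned on continuous
    embeddings of the mixture and reference speech."""
    tokens = np.zeros((COARSE_LAYERS, n_frames), dtype=np.int64)
    for t in range(n_frames):
        # A real model would attend over the embeddings and previously
        # generated tokens; here we draw placeholder tokens instead.
        tokens[:, t] = rng.integers(0, VOCAB, size=COARSE_LAYERS)
    return tokens

def encoder_refiner(coarse_tokens, mixture_emb, reference_emb):
    """Stand-in for the one-step encoder-only LM: reconstructs all
    N_LAYERS codec layers in a single non-autoregressive pass, keeping
    the coarse layers and adding fine-grained ones."""
    n_frames = coarse_tokens.shape[1]
    full = np.zeros((N_LAYERS, n_frames), dtype=np.int64)
    full[:COARSE_LAYERS] = coarse_tokens  # coarse prediction retained
    full[COARSE_LAYERS:] = rng.integers(
        0, VOCAB, size=(N_LAYERS - COARSE_LAYERS, n_frames))
    return full

mixture_emb = rng.standard_normal((T, 256))     # continuous mixture embedding
reference_emb = rng.standard_normal((20, 256))  # continuous reference embedding

coarse = ar_decoder_coarse(mixture_emb, reference_emb, T)
full = encoder_refiner(coarse, mixture_emb, reference_emb)
print(coarse.shape, full.shape)  # → (3, 50) (8, 50)
```

The design choice worth noting is the split: auto-regressive generation handles the coarse layers where temporal coherence matters most, while a single parallel pass cheaply adds the fine-grained residual layers.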

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)